
Support custom events for metrics update #874

Closed · snie2012 opened this issue Apr 3, 2020 · 13 comments · Fixed by #979

snie2012 commented Apr 3, 2020

Currently, for ignite metrics, the update function is called once per iteration. In scenarios like multitasking, there can be metrics for different tasks, and each iteration might only have outputs for a subset of those tasks. In this situation, only the metrics associated with the tasks that have outputs can/should be updated.

To make it work, I need to override the attach function like this:

from ignite.engine import Events
from ignite.metrics import RunningAverage


class CustomRunningAverage(RunningAverage):
    def attach(self, engine, name, custom_event=None):
        if custom_event is None:
            super().attach(engine, name)
        else:
            if self.epoch_bound:
                # restart average every epoch
                engine.add_event_handler(Events.EPOCH_STARTED, self.started)
            # compute metric
            engine.add_event_handler(custom_event, self.iteration_completed)
            # apply running average
            engine.add_event_handler(custom_event, self.completed, name)

This is doable, but it would be nice to have it by default in ignite. Also, this manual override doesn't work with DDP. The cause is related to _internal_attach in metrics_lambda.py.

I would like to propose:

  1. Keep updating every iteration by default
  2. Support overriding when to update by accepting custom events
  3. Make it work with DDP

Multitasking is one case where this can be very helpful, but I am sure there will be other use cases. A sketch of the kind of setup I have in mind is below.
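To make the scenario concrete, here is a rough sketch of how such per-task custom events could be defined and fired from the training step. The TaskEvents names and the dict-shaped output are hypothetical, and it assumes ignite's EventEnum, register_events and fire_event APIs:

from ignite.engine import Engine, Events, EventEnum


class TaskEvents(EventEnum):
    # hypothetical per-task events
    TASK_A_COMPLETED = "task_a_completed"
    TASK_B_COMPLETED = "task_b_completed"


def train_step(engine, batch):
    # ... multitask forward/backward pass ...
    # only a subset of tasks produces outputs on any given iteration
    return {"task_a": 0.3, "task_b": None}


trainer = Engine(train_step)
trainer.register_events(*TaskEvents)


@trainer.on(Events.ITERATION_COMPLETED)
def fire_task_events(engine):
    # fire a per-task event only when that task produced an output this iteration
    if engine.state.output.get("task_a") is not None:
        engine.fire_event(TaskEvents.TASK_A_COMPLETED)
    if engine.state.output.get("task_b") is not None:
        engine.fire_event(TaskEvents.TASK_B_COMPLETED)

A CustomRunningAverage like the one above could then be attached with custom_event=TaskEvents.TASK_A_COMPLETED, so it only updates when task A actually has outputs.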

sdesrozis (Contributor) commented:

Hey, thank you for the suggestion!
To be sure I understand correctly, could you give pseudo-code of your update?

To add some possibly useful input: did you check event filtering?
https://github.com/pytorch/ignite#built-in-events-filtering

The last thing that could help is detaching metrics:

metric = ...
engine = ...
metric.detach(engine)
assert "mymetric" not in engine.run(data).metrics
assert not metric.is_attached(engine)

snie2012 (Author) commented Apr 3, 2020

@sdesrozis Thanks for your quick response!

  1. For update, I am referring to the built-in update functions in metrics, such as this one for Precision: https://github.com/pytorch/ignite/blob/master/ignite/metrics/precision.py#L136

  2. Does event filtering only support filtering on frequency? If that's the case, it won't work, since the metrics' updates are irregular.

  3. I wouldn't vote for detach. It's the last resort if everything else doesn't work out.

Overall, the reason I bring this up as a feature proposal is that:

  1. It won't break the default use case, which is to update every iteration
  2. It is easy to add
  3. It makes metrics more flexible and potentially useful in a lot of cases

sdesrozis (Contributor) commented Apr 3, 2020

  1. You can filter on whatever you want and do irregular firing:

iterations = [0, 12, 77]

def my_filter(engine, event):
    return engine.state.iteration in iterations

@engine.on(Events.ITERATION_COMPLETED(event_filter=my_filter))
def my_function(_):
    # do what I want
    pass

Btw, I don't know whether this could help you. Could you give me some feedback about it?

vfdev-5 (Collaborator) commented Apr 3, 2020

@snie2012 Thanks for the suggestion!

Could you please clarify the following question about your scenario:

  • do we need to compute metrics during the training phase, i.e. a batch-wise metric while the model is still being updated?

snie2012 (Author) commented Apr 7, 2020

Sorry for the late response.

@sdesrozis
Unfortunately that doesn't work, because I don't know the filter cadence beforehand. It can be random during training.

@vfdev-5
I am actually talking about epoch-wise metrics, but those metrics will only be updated on a subset of all iterations. Does this answer your question?

sdesrozis (Contributor) commented:

Ok, I understand your need :) Let's see how to address that.

vfdev-5 added this to To do in 0.4.0 via automation Apr 22, 2020
sdesrozis self-assigned this Apr 24, 2020
sdesrozis (Contributor) commented Apr 24, 2020

@snie2012

I’m working on this issue and want to make sure we are on the same page. You said that these metrics should be updated on a subset of iterations, so during an epoch, some batches are not considered. I understand that this subset is not known in advance, so we can't use a filter.

Questions:

  • How do you know / signal when to update the metric? Only via a custom event, or flags, or something else?
  • Is it a collective update? I mean, do all processes update together (or not)?

Thanks for your help 🙏🏻

snie2012 (Author) commented:

@sdesrozis Thanks a lot for working on this, and apologies for the late reply. To answer your questions:

  1. I use a custom event to signal an update, as shown in the code snippet at the top of the issue.
  2. Yes, it is a collective update and should behave as such in a distributed setting.

snie2012 (Author) commented Apr 26, 2020

@sdesrozis Would you mind showing an example of using the changes you made in sdesrozis:issue_874 to update on custom events? My understanding is that I can create a customized MetricUsage class to do that.

vfdev-5 (Collaborator) commented Apr 26, 2020

@snie2012 To detail "1.": what exactly is your custom event? Is it a built-in event from Events, or your own registered event?

About "2." and what you posted at the very beginning of the issue:

This is doable, but it would be nice to have it by default in ignite. Also, this manual override doesn't work with DDP. The cause is related to _internal_attach in metrics_lambda.py.

Could you please provide a complete example of how you use it with MetricsLambda? Thanks
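For context, a MetricsLambda usually arises from arithmetic on metrics, e.g. an F1 score composed from Precision and Recall. A rough, purely illustrative sketch (assuming an existing engine, not the author's actual setup):

from ignite.metrics import Precision, Recall

engine = ...

precision = Precision(average=False)
recall = Recall(average=False)
# arithmetic on metrics builds a MetricsLambda behind the scenes,
# and it attaches its dependency metrics via _internal_attach
f1 = (precision * recall * 2 / (precision + recall)).mean()
f1.attach(engine, "f1")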

sdesrozis (Contributor) commented:

Yes, MetricUsage should be the way to configure your metric.

Epoch-wise accuracy:

engine = ...
acc = Accuracy()
acc.attach(engine, "acc", usage="epoch_wise")  # default

It means the metric is reset at Events.EPOCH_STARTED, updated at Events.ITERATION_COMPLETED and completed at Events.EPOCH_COMPLETED.

Batch-wise accuracy:

engine = ...
acc = Accuracy()
acc.attach(engine, "acc", usage="batch_wise")

It means the metric is reset at Events.ITERATION_STARTED, updated at Events.ITERATION_COMPLETED and completed at Events.ITERATION_COMPLETED.

You can also define a custom usage:

my_usage = YourCustomUsage()
acc = Accuracy()
acc.attach(engine, "acc", usage=my_usage)

Here, YourCustomUsage must inherit from MetricUsage, and you customize when the metric's reset, update and completed calls occur.
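For example, a usage wired to a custom event could look roughly like the sketch below. It assumes the MetricUsage constructor takes the started, completed and iteration_completed events (mirroring EpochWise and BatchWise), and reuses the engine from the snippets above together with the hypothetical TaskEvents.TASK_A_COMPLETED event registered earlier in the thread:

from ignite.engine import Events
from ignite.metrics import Accuracy
from ignite.metrics.metric import MetricUsage


class CustomEventUsage(MetricUsage):
    # reset at epoch start, update only when `update_event` fires,
    # and store the result under the metric name at epoch end
    def __init__(self, update_event):
        super().__init__(
            started=Events.EPOCH_STARTED,
            completed=Events.EPOCH_COMPLETED,
            iteration_completed=update_event,
        )


acc = Accuracy()
acc.attach(engine, "acc", usage=CustomEventUsage(TaskEvents.TASK_A_COMPLETED))

With this, the metric is updated only on the iterations where the custom event is fired.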

HTH

ericspod mentioned this issue May 6, 2020
vfdev-5 moved this from To do to In progress in 0.4.0 May 10, 2020
0.4.0 automation moved this from In progress to Done May 18, 2020
vfdev-5 (Collaborator) commented May 18, 2020

@snie2012 We just merged PR #979 with MetricUsage; it will be available today or tomorrow in the nightly releases. If you can give it a try and leave us feedback, that would be awesome! Thanks!

snie2012 (Author) commented:

Will do. Thanks a lot!
