Conversation

@failable (Contributor) commented Apr 23, 2020

Related to #959

Description:

This commit adds support for nested metric values. If Metric.compute returns a dict, its values will be flattened into engine.state.metrics in Metric.completed.

Note that in this case the value types of the returned dict are limited (i.e. Dict[Any, non-number type] is not allowed), while the original allowed return type of Metric.compute is Any. We could also allow other value types in the dict, but this may lead to unclear behavior for users when they return different kinds of dicts.
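
For illustration, a minimal, hypothetical sketch of the behaviour described above from the user's side (the toy `LossStats` metric, the engine, and the data are made up for this example; only the flattening of the returned dict into `engine.state.metrics` is what this PR adds):

```python
# Hypothetical example: a toy metric whose compute() returns a dict of scalars.
import torch

from ignite.engine import Engine
from ignite.metrics import Metric


class LossStats(Metric):
    """Toy metric returning a dict from compute()."""

    def reset(self):
        self._values = []

    def update(self, output):
        self._values.append(float(output))

    def compute(self):
        values = torch.tensor(self._values)
        return {"loss_mean": values.mean().item(), "loss_max": values.max().item()}


evaluator = Engine(lambda engine, batch: batch)  # the "output" is just the batch value
LossStats().attach(evaluator, "loss_stats")

state = evaluator.run([0.5, 0.3, 0.9])
# With this PR, the dict returned by compute() is flattened into engine.state.metrics,
# so both "loss_mean" and "loss_max" appear as individual entries.
print(state.metrics)
```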

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

return tensor.item()
return tensor

def ensure_metric_value(self, value: Any, raise_error=True) -> Union[numbers.Number, torch.Tensor]:
Contributor:

I don't know if an exception should be raised. Metrics are very generic; they can work with `Any`. Metrics associated with loggers should use restricted types.

@vfdev-5?

Contributor Author (failable):

I agree metrics are generic. Maybe remove the type checking here and move it to the Logger later when necessary?

@sdesrozis (Contributor) Apr 23, 2020:

OK, let's try this! Just handle a flat dict here (keeping the existing tests compatible).

Contributor Author (failable):

I just found one more issue: if the Any type is allowed, shall we deep-flatten a dict in this case? Or shall we only handle dicts whose values are all numbers?

@vfdev-5 (Collaborator) Apr 23, 2020:

IMO, we should not check the output type and raise an error; we should just keep the previous behaviour.
Previously, we had a piece of code that applied .item() to torch scalars. Maybe just factoring out that code is enough.
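
For instance, the factoring mentioned here could look roughly like this (a sketch only, with a hypothetical helper name, not the PR's final code):

```python
# Possible shape of the factored-out scalar handling: apply .item() only to
# 0-d tensors and leave every other value untouched.
import torch


def _to_number_if_scalar_tensor(value):
    if torch.is_tensor(value) and len(value.shape) == 0:
        return value.item()
    return value
```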

@sdesrozis (Contributor) Apr 23, 2020:

I agree. At the moment, we should flatten only the result of metrics. IMHO, what is embedded in the dict is related to the metric itself.

We should note in the docs that the generic result type `Any` is a general-purpose choice, but that interoperability with other handlers / features implies some restrictions: scalars (numbers, tensors) and dicts of scalars.

What do you think?

And thank you again for your help 🙏🏻

if torch.is_tensor(result) and len(result.shape) == 0:
    result = result.item()
engine.state.metrics[name] = result
if isinstance(result, dict):
Collaborator:

Suggested change:
- if isinstance(result, dict):
+ if isinstance(result, Mapping):

@vfdev-5 (Collaborator) commented Apr 23, 2020

IMO, the idea of the PR is to unnest one level of mappings (e.g. dict), as in @Isolet's use case. Thoughts?
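
A rough sketch of that one-level unnesting inside `Metric.completed`, using `collections.abc.Mapping` as suggested in the review above (the merged code may differ in details such as how keys are named):

```python
# Sketch only: unnest exactly one level of mapping into engine.state.metrics;
# anything nested deeper inside the values is left as-is.
from collections.abc import Mapping

import torch


def completed(self, engine, name):
    result = self.compute()
    if isinstance(result, Mapping):
        for key, value in result.items():
            engine.state.metrics[key] = value
    else:
        if torch.is_tensor(result) and len(result.shape) == 0:
            result = result.item()
        engine.state.metrics[name] = result
```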

@failable (Contributor Author) commented Apr 24, 2020

My original idea was simply to flatten a one-level dict of metrics. But since metrics are so generic, now I think maybe we should not treat a one-level dict as anything special either? Before I opened the issue, I used the following snippet to achieve my goal.

import warnings

# Assumed imports: the original snippet did not show them. OutputHandler is one of
# ignite's contrib logger handlers (the TensorBoard one is used here as an example);
# flatten_dict is a user-supplied helper that flattens nested dicts, defined elsewhere.
from ignite.contrib.handlers.tensorboard_logger import OutputHandler


class MultipleMetricsOutputHandler(OutputHandler):

    def _setup_output_metrics(self, engine):
        """Collect metrics to log, flattening nested dicts when metric_names == 'all'."""
        metrics = {}
        if self.metric_names is not None:
            if isinstance(self.metric_names, str) and self.metric_names == 'all':
                # Flatten nested metric dicts into a single flat mapping.
                metrics = flatten_dict(engine.state.metrics)
            else:
                for name in self.metric_names:
                    if name not in engine.state.metrics:
                        warnings.warn("Provided metric name '{}' is missing "
                                      "in engine's state metrics: {}".format(
                                          name, list(engine.state.metrics.keys())))
                        continue
                    metrics[name] = engine.state.metrics[name]

        if self.output_transform is not None:
            output_dict = self.output_transform(engine.state.output)

            if not isinstance(output_dict, dict):
                output_dict = {'output': output_dict}

            metrics.update(output_dict)

        return metrics

How about keeping everything in Metric unchanged? Is there a better place to achieve my goal? I'm not familiar enough with ignite, so I can't answer that myself right now...


But still, I found that some functions, such as save_best_model_by_val_score and add_early_stopping_by_val_score, access engine.state.metrics values directly, so flattening may still have its place... Could hooks help?
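
For reference, a usage sketch of the workaround handler above (assuming ignite's contrib `TensorboardLogger`; `evaluator` is an evaluation `Engine` with metrics already attached and `flatten_dict` is the user's own helper, both defined elsewhere):

```python
# Usage sketch only; MultipleMetricsOutputHandler comes from the snippet above.
from ignite.contrib.handlers import TensorboardLogger
from ignite.engine import Events

tb_logger = TensorboardLogger(log_dir="tb_logs")
tb_logger.attach(
    evaluator,  # an evaluation Engine with metrics attached (defined elsewhere)
    log_handler=MultipleMetricsOutputHandler(tag="validation", metric_names="all"),
    event_name=Events.EPOCH_COMPLETED,
)
```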

@vfdev-5 (Collaborator) commented Apr 24, 2020

@Isolet I think one-level flattening is a good balance between simplifying things and keeping Metric rather generic. For other, non-trivial cases the user should write their own adapters.

Returns:
-    Any: the actual quantity of interest.
+    Any: the actual quantity of interest. However, if a :class:`dict` is returned, it should contain
+    :class:`numbers.Number` or :class:`torch.tensor` values and will be flattened into
+    `engine.state.metrics` when :func:`~ignite.metrics.Metric.completed` is called.
@vfdev-5 (Collaborator) Apr 24, 2020:

Let's remove those type limitations:

Any: the actual quantity of interest. However, if a :class:`dict` is returned, it will be flattened
into `engine.state.metrics` when :func:`~ignite.metrics.Metric.completed` is called.

Contributor Author (failable):

Done!

@sdesrozis (Contributor) commented:
LGTM, but the CI is quite unstable...

@sdesrozis (Contributor) left a comment:

LGTM! Thank you very much!!

@sdesrozis merged commit 68452b9 into pytorch:master on Apr 24, 2020
@failable (Contributor Author) commented:
Thanks!
