This repository has been archived by the owner on Jul 7, 2023. It is now read-only.

Custom evaluation metrics #1336

Merged
merged 3 commits into from Jan 4, 2019

Conversation

ywkim
Contributor

@ywkim ywkim commented Jan 2, 2019

This PR lets custom evaluation metrics be defined at the problem level.

It would be nice if we could add custom metrics to problems (see also #822). I propose adding a new method to Problem that returns metric functions:

def eval_metric_fns(self, model_hparams):
  # Default implementation: look up each metric name declared by
  # eval_metrics() in the global metrics registry.
  metric_names = self.eval_metrics()
  return {
      metric_name: metrics.METRICS_FNS[metric_name]
      for metric_name in metric_names
  }
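To illustrate the idea, here is a minimal, self-contained sketch of how a problem subclass might override such a method to add its own metric. `MyProblem`, `exact_match`, and the `METRICS_FNS` stand-in below are illustrative assumptions, not the actual tensor2tensor API.

```python
# Stand-in for metrics.METRICS_FNS: maps metric names to metric functions.
METRICS_FNS = {
    "accuracy": lambda preds, labels: sum(
        p == l for p, l in zip(preds, labels)) / float(len(labels)),
}


class Problem(object):
    """Minimal stand-in for the Problem base class."""

    def eval_metrics(self):
        # Metric names this problem wants evaluated.
        return ["accuracy"]

    def eval_metric_fns(self, model_hparams):
        # Default: resolve each declared name through the registry.
        return {name: METRICS_FNS[name] for name in self.eval_metrics()}


def exact_match(preds, labels):
    # Hypothetical custom metric: 1.0 iff every prediction matches.
    return float(preds == labels)


class MyProblem(Problem):
    def eval_metric_fns(self, model_hparams):
        # Start from the registry-backed defaults, then attach a
        # problem-level custom metric that has no registry entry.
        fns = super(MyProblem, self).eval_metric_fns(model_hparams)
        fns["exact_match"] = exact_match
        return fns


fns = MyProblem().eval_metric_fns(model_hparams=None)
print(sorted(fns))  # ['accuracy', 'exact_match']
```

The point of the hook is that the evaluation loop only ever asks the problem for a name-to-function dict, so custom metrics need no registry changes.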

@googlebot googlebot added the cla: yes PR author has signed CLA label Jan 2, 2019
@afrozenator
Member

Thanks a lot for doing this @ywkim ! This is a very good idea and we had multiple folks requesting this internally as well -- so thanks a lot again for doing this!

@afrozenator afrozenator merged commit dbab44c into tensorflow:master Jan 4, 2019
tensorflow-copybara pushed a commit that referenced this pull request Jan 4, 2019
PiperOrigin-RevId: 227913649
kpe pushed a commit to kpe/tensor2tensor that referenced this pull request Mar 2, 2019
* Custom evaluation metrics

* Fix Python 2 compatibility issue

* Fix notebook test
kpe pushed a commit to kpe/tensor2tensor that referenced this pull request Mar 2, 2019
PiperOrigin-RevId: 227913649