
About ranking metric evaluation #8

Closed
vincenttsai2015 opened this issue Dec 17, 2021 · 2 comments

Comments

vincenttsai2015 commented Dec 17, 2021

Hi,

I'm wondering whether it is possible to evaluate the ranking metrics MAP@K (mean average precision) and HR@K (hit ratio) for GBMF/GBGCN under librecframework. If so, how should I modify the code? Thanks.

vincenttsai2015 changed the title from "About metric evaluation" to "About ranking metric evaluation" on Dec 17, 2021
Sweetnow (Owner) commented

You can modify the framework in https://github.com/Sweetnow/librecframework/blob/master/librecframework/metric.py.

Just follow the current code:

  • create a new class derived from Metric
  • implement __call__ (score is the model output; ground_truth is the corresponding data from Dataset.__getitem__)
  • add the metric value to self._sum
  • add the sample count to self._cnt
  • register the metric class in _ALL_METRICS

Then you can choose the new metric in config.json by its class name (a sketch of such subclasses follows below).
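Here is a minimal sketch of what HR@K and MAP@K subclasses might look like following the steps above. This is not code from the repository: the base-class constructor, the batched tensor shapes, and the dict-style _ALL_METRICS registration are all assumptions to verify against the actual metric.py.

```python
import torch

# Assumed import path; Metric and _ALL_METRICS live in metric.py per the comment above.
from librecframework.metric import _ALL_METRICS, Metric


class HR(Metric):
    """Hit Ratio @ K: 1 per user if any ground-truth item appears in the top-K."""

    def __init__(self, k: int) -> None:
        super().__init__()  # assumes the base constructor takes no extra arguments
        self.k = k

    def __call__(self, score: torch.Tensor, ground_truth: torch.Tensor) -> None:
        # score: (batch, num_items) model outputs
        # ground_truth: (batch, num_items) 0/1 relevance from Dataset.__getitem__
        _, topk = score.topk(self.k, dim=1)                 # indices of the top-K items
        hits = ground_truth.gather(1, topk).sum(dim=1) > 0  # any hit in the top-K?
        self._sum += hits.float().sum().item()              # add the metric value to self._sum
        self._cnt += score.shape[0]                         # add the sample count to self._cnt


class MAP(Metric):
    """Mean Average Precision @ K."""

    def __init__(self, k: int) -> None:
        super().__init__()
        self.k = k

    def __call__(self, score: torch.Tensor, ground_truth: torch.Tensor) -> None:
        _, topk = score.topk(self.k, dim=1)
        rel = ground_truth.gather(1, topk).float()          # (batch, k) relevance of ranked items
        ranks = torch.arange(1, self.k + 1, device=rel.device, dtype=rel.dtype)
        prec_at_i = rel.cumsum(dim=1) / ranks               # precision at each cutoff 1..K
        # AP@K = sum_i P@i * rel_i / min(#relevant, K); clamp(min=1) avoids division by zero
        denom = ground_truth.sum(dim=1).float().clamp(max=self.k).clamp(min=1)
        ap = (prec_at_i * rel).sum(dim=1) / denom
        self._sum += ap.sum().item()
        self._cnt += score.shape[0]


# Assumed here to be a name-to-class mapping; mirror however the existing
# metrics in metric.py register themselves, then reference the class name
# (e.g. "HR" or "MAP") in config.json.
_ALL_METRICS['HR'] = HR
_ALL_METRICS['MAP'] = MAP
```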

Sweetnow (Owner) commented

BTW, I think HR is the same as Precision.
