Implemented hits@k metric for link-prediction #675

Open
wangz10 wants to merge 4 commits into main

Conversation

wangz10 (Collaborator) commented Dec 5, 2023

Issue #, if available:

Description of changes:

  • Implemented hits@k metric for link-prediction (see the sketch below)
  • Refactored GSgnnMrrLPEvaluator and GSgnnPerEtypeMrrLPEvaluator to work with both mrr and hits@k
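
A minimal sketch of how hits@k can be computed from the ranks of positive edges, for reference (the function name and signature are illustrative, not the PR's exact code):

import torch

def hits_at_k(ranks, k=100):
    """ Fraction of positive edges ranked within the top k
        (ranks are 1-based) among their negative candidates.
    """
    ranks = torch.as_tensor(ranks, dtype=torch.float32)
    return (ranks <= k).float().mean().item()

# Example: ranks of five positive test edges.
print(hits_at_k([1, 3, 250, 12, 101], k=100))  # 0.6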

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@wangz10 wangz10 added the ready (able to trigger the CI) label Dec 5, 2023
@@ -2076,6 +2076,14 @@ def eval_metric(self):

return eval_metric

@property
def eval_k(self):
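
A plausible completion of the property shown in this hunk (the backing attribute name _k is an assumption, not necessarily the PR's code):

@property
def eval_k(self):
    """ The cutoff k used when computing hits@k. """
    return self._k
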
Contributor

Is it possible to have a list of k values instead of just one k?

Collaborator Author

I think most of the LP/retrieval datasets only select one specific k.

Collaborator Author

Changed to eval_hit_k.

early_stop_strategy=EARLY_STOP_AVERAGE_INCREASE_STRATEGY):
eval_metric = ["mrr"]
early_stop_strategy=EARLY_STOP_AVERAGE_INCREASE_STRATEGY,
k=100):
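
The hunk above adds a k argument (defaulting to 100) next to the existing mrr default. A minimal sketch of that kind of constructor change; the class name is hypothetical and the constant value is a placeholder:

EARLY_STOP_AVERAGE_INCREASE_STRATEGY = "average_increase"  # placeholder value for the sketch

class LPEvaluatorSketch:
    def __init__(self, eval_metric=None,
                 use_early_stop=False,
                 early_stop_strategy=EARLY_STOP_AVERAGE_INCREASE_STRATEGY,
                 k=100):
        # mrr stays the default metric; k is the hits@k cutoff.
        self._metric = eval_metric if eval_metric is not None else ["mrr"]
        self._k = k
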
Contributor

It is better to define a hit@k evaluator.

Collaborator Author

I think that will lead to a lot of duplicated code. I saw that GSgnnRegressionEvaluator allows multiple metrics. What's the reasoning for having evaluator objects versus metric objects?

Contributor

We have two choices:

  1. Define a GSgnnRankingLPEvaluator class, and have both GSgnnMrrLPEvaluator and GSgnnHitKLPEvaluator inherit from GSgnnRankingLPEvaluator.
  2. Define a GSgnnRankingLPEvaluator class that implements both mrr() and hit(), renaming the existing GSgnnMrrLPEvaluator to GSgnnRankingLPEvaluator.

I would prefer the first one.
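
A minimal sketch of the first option (the base class name comes from the comment above; the method name and bodies are placeholders, not the eventual implementation):

class GSgnnRankingLPEvaluator:
    """ Shared logic for ranking-based LP evaluation, e.g. collecting
        candidate rankings, tracking the best score, and early stopping.
    """
    def compute_score(self, rankings, train=True):
        raise NotImplementedError

class GSgnnMrrLPEvaluator(GSgnnRankingLPEvaluator):
    """ Computes mrr from the rankings prepared by the base class. """
    def compute_score(self, rankings, train=True):
        ...

class GSgnnHitKLPEvaluator(GSgnnRankingLPEvaluator):
    """ Computes hits@k from the same rankings. """
    def compute_score(self, rankings, train=True):
        ...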
