New user questions #112
Thank you for your comments.
All right, we may consider these as part of our future plans for OpenHGNN. Thank you again.
Thanks for your reply @dddg617 ! Regarding your last point, I am afraid that your current pipeline doesn't save the models, at least not when using the script under examples/customization. It only saves the logs. I looked for the parts of your code that do something like torch.save or checkpointing, and apparently they are only called when early stopping happens. I only checked the code briefly, though, so I might be wrong.
All right, for the last point: currently we do not support saving models in examples/customization, but we do support it in openhgnn/trainerflow. If you run the script the previous way, you will get a .pt file in openhgnn/output/{model name}. We will add the same function to examples/customization.
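For anyone landing here: the general PyTorch pattern behind such .pt files is a state_dict save/load. The exact contents OpenHGNN writes under openhgnn/output/{model name} may differ, so treat this as a sketch of the mechanism rather than OpenHGNN's actual save format (the nn.Linear stand-in and file name are placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained model (the real class depends on which
# OpenHGNN model you configured; this only illustrates the pattern).
model = nn.Linear(4, 2)

# A checkpoint-style save of the learned parameters:
torch.save(model.state_dict(), "model.pt")

# Later (e.g. in your own inference script), rebuild the same
# architecture and load the weights back in:
model2 = nn.Linear(4, 2)
model2.load_state_dict(torch.load("model.pt"))
model2.eval()

# The restored model reproduces the original's predictions:
x = torch.randn(1, 4)
with torch.no_grad():
    same = torch.allclose(model(x), model2(x))
print(same)  # True
```

Once the weights are restored, predictions can be computed and saved with torch.save or numpy, which also covers the "calculate the metric on my own" route mentioned below.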
Hi all, thanks for making this library available. I am trying to use it for my benchmarks, but I am having a bit of trouble.
I want to evaluate my own dataset for recommendation. On the website there is an example only for node classification. I started digging through the Git repository and found an example for link_prediction under examples/customization.
I decided to settle on link_prediction, because I don't know what the equivalent of AsLinkPredictionDataset for recommendation would be.
I want to compute hits@k, but it is not clear where to change the metric: I couldn't find it as an input of AsLinkPredictionDataset, config.ini, or OpenHGNN, so I have no idea how to change it.
In the OGB benchmarks, they compute hits@k by providing a neg_df and a positive_df and comparing scores_pos > scores_neg. Maybe this could be part of the link_prediction pipeline, to support hits@k?
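As a reference for what is being asked for, an OGB-style hits@k can be sketched in a few lines: a positive edge counts as a hit when its score exceeds the k-th highest negative score. The function name and the toy scores below are made up for illustration:

```python
import numpy as np

def hits_at_k(scores_pos, scores_neg, k):
    """OGB-style hits@k: fraction of positive edges whose score is
    strictly greater than the k-th highest negative score."""
    if len(scores_neg) < k:
        # Fewer than k negatives: every positive trivially ranks in the top k.
        return 1.0
    kth_neg = np.sort(scores_neg)[-k]
    return float(np.mean(scores_pos > kth_neg))

pos = np.array([0.9, 0.7, 0.2])
neg = np.array([0.8, 0.5, 0.4, 0.1])
print(hits_at_k(pos, neg, 2))  # 2nd-highest negative is 0.5; 0.9 and 0.7 beat it -> 2/3
```

This only needs the raw positive and negative scores, which is why access to the trained model (or its saved predictions) would be enough to compute the metric outside the pipeline.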
I could also calculate the metric on my own if I could save the predictions, but it is not clear how to do inference or access the model after it is trained. I couldn't find it in the tutorials or examples.
In summary, it would be nice to have:
- a way to choose the evaluation metric (e.g. hits@k) for link prediction
- a way to save the trained model and its predictions, or to run inference after training
Thanks very much!!
Felipe
my code