> It is very nice work and inspires me a lot. How do you evaluate the predictions generated by the LLM? The paper claims to evaluate with Acc and F1; however, it can be hard to evaluate free-form text sometimes. #41
Hi, I'm also interested in this work and was wondering whether the evaluation code you mentioned has been updated? Could you clarify which part exactly?
Thanks for your interest! An example is in https://github.com/HKUDS/GraphGPT/blob/main/scripts/eval_script/cal_metric_arxiv.py
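For readers who cannot run the linked script, here is a minimal, self-contained sketch of the general idea behind Acc/F1 evaluation of LLM text outputs: match a known category name inside each generated response, then score the extracted labels against the ground truth. The category names, responses, and helper functions below are purely illustrative assumptions, not taken from the GraphGPT code.

```python
# Illustrative sketch only: the label set, responses, and helpers are
# made up; the real evaluation lives in cal_metric_arxiv.py.
CATEGORIES = ["cs.AI", "cs.CL", "cs.CV"]  # assumed (toy) label set


def extract_label(response: str) -> str:
    """Return the first known category mentioned in the response."""
    low = response.lower()
    for cat in CATEGORIES:
        if cat.lower() in low:
            return cat
    return "unknown"  # response mentioned no known category


def accuracy(truths, preds):
    return sum(t == p for t, p in zip(truths, preds)) / len(truths)


def macro_f1(truths, preds, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(truths, preds))
        fp = sum(t != lab and p == lab for t, p in zip(truths, preds))
        fn = sum(t == lab and p != lab for t, p in zip(truths, preds))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(labels)


responses = ["This paper belongs to cs.CL.", "Probably a cs.CV paper."]
truths = ["cs.CL", "cs.AI"]
preds = [extract_label(r) for r in responses]
print(accuracy(truths, preds))                 # 0.5
print(macro_f1(truths, preds, CATEGORIES))     # 0.333...
```

The brittle part in practice is `extract_label`: responses that paraphrase or list several categories need a stricter parsing rule than a substring match.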
Thank you for your reply. As I am not very familiar with this dataset, I would like to ask where to get the label information, i.e. where can I get the labelidx2arxivcategeory.csv used in the code?
You could refer to https://github.com/mims-harvard/GNNGuard/tree/master/Datasets/ogbn_arxiv/mapping .
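Once downloaded, loading that mapping is straightforward. The sketch below assumes the CSV has a header row followed by `index,category` pairs (verify against the actual file from the linked GNNGuard directory; the column names may differ). The inline CSV text here is a small stand-in, as the real ogbn-arxiv mapping covers 40 classes.

```python
import csv
import io

# Stand-in for labelidx2arxivcategeory.csv; the real file has 40 rows
# and this assumed header/format should be checked against it.
csv_text = """label idx,arxiv category
0,arxiv cs na
1,arxiv cs mm
2,arxiv cs lo
"""


def load_label_map(f) -> dict[int, str]:
    """Read 'index,category' rows (after a header) into a dict."""
    reader = csv.reader(f)
    next(reader)  # skip header row
    return {int(idx): cat for idx, cat in reader}


# In practice: with open("labelidx2arxivcategeory.csv") as f: ...
label_map = load_label_map(io.StringIO(csv_text))
print(label_map[1])  # arxiv cs mm
```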
Thanks for your reply. I will download the label data and test it.
You are welcome! 🤗
Thank you for your interest in our GraphGPT. I apologize for the delayed response due to the academic workload at the end of the semester.
The evaluation code will be released by the end of this week!
Wishing you an early Merry Christmas!
Originally posted by @tjb-tech in #19 (comment)