#3 GraphSAGE pseudocode/demo #12
Conversation
The code runs, and the loss decreases during training as expected. However, the f1 scores do not seem to change along with the loss. This is somewhat suspicious, as I'd expect the f1 scores to increase while the loss decreases. It might be worth checking how the f1 scores are evaluated. Suggestions for @kjun9
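For context, a common cause of this symptom (loss improving while f1 stays flat) is computing f1 on raw model scores instead of hard class predictions. The following is a minimal numpy sketch of a correct micro-averaged f1 computation, not the demo's actual evaluation code; the array names, shapes, and the `micro_f1` helper are all illustrative assumptions:

```python
import numpy as np

def micro_f1(y_true, y_pred, n_classes):
    # Micro-averaged f1: pool true positives, false positives and false
    # negatives across all classes before computing precision and recall.
    tp = fp = fn = 0
    for c in range(n_classes):
        tp += int(np.sum((y_pred == c) & (y_true == c)))
        fp += int(np.sum((y_pred == c) & (y_true != c)))
        fn += int(np.sum((y_pred != c) & (y_true == c)))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=200)   # toy ground-truth labels, 3 classes
scores = rng.normal(size=(200, 3))      # toy raw model outputs (logits)

# The key step: take argmax over the class axis to get hard predictions
# before scoring. Passing raw scores into an f1 routine (or using the
# wrong axis) can produce f1 values that never track the loss.
y_pred = scores.argmax(axis=1)
print(micro_f1(y_true, y_pred, 3))
```

Note that for single-label multiclass prediction, micro-averaged f1 equals plain accuracy, which is a quick sanity check on any evaluation routine.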
Yes, the run command was correct, my bad.
@youph Thanks for the suggestions, please review the changes I've made accordingly. Most importantly, I found out I was missing ReLUs in between each of the aggregation layers (despite having drawn boxes for them in the presentation...), so now the accuracy results look a lot more comparable to the original implementation. I've also added some more comments and optional parameters, and a little bit of info about the example data from the original GraphSAGE codebase.
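For readers following along, the fix described above can be sketched as stacked mean-aggregator layers with a ReLU between them. This is a hypothetical numpy illustration, not the PR's actual code; the shapes, random weights, and the simplified one-hop neighbour handling are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_agg(h_self, h_neigh, w_self, w_neigh, relu=True):
    # One GraphSAGE-style mean-aggregator layer: transform a node's own
    # features and the mean of its sampled neighbours' features, then sum.
    z = h_self @ w_self + h_neigh.mean(axis=-2) @ w_neigh
    # The fix discussed above: apply a ReLU between stacked layers
    # (omitted on the final layer, whose outputs feed the classifier).
    return np.maximum(z, 0.0) if relu else z

# Toy batch: 4 target nodes, each with 5 sampled neighbours, 8 input features.
x_self = rng.normal(size=(4, 8))
x_neigh = rng.normal(size=(4, 5, 8))

w1s, w1n = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
w2s, w2n = rng.normal(size=(16, 7)), rng.normal(size=(16, 7))

# Layer 1 with ReLU. The neighbours also get a (simplified, self-only)
# layer-1 transform so that layer 2 has hidden states to aggregate.
h1_self = mean_agg(x_self, x_neigh, w1s, w1n)           # shape (4, 16)
h1_neigh = np.maximum(x_neigh @ w1s, 0.0)               # shape (4, 5, 16)

# Layer 2 without the activation: final per-node scores.
h2 = mean_agg(h1_self, h1_neigh, w2s, w2n, relu=False)  # shape (4, 7)
print(h2.shape)
```

Without the `np.maximum` step, the two layers collapse into a single linear map of the inputs, which would explain degraded accuracy relative to the original implementation.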
@kjun9 Thanks for fixing the code. Yes, now the f1 scores look better; in particular, they improve together with the loss.
@youph Sure, you can check the descriptions I've added now.
@kjun9 Thanks, all good now.