
#3 GraphSAGE pseudocode/demo #12

Merged
merged 7 commits into develop from graphsage on Apr 30, 2018

Conversation

kjun9
Contributor

@kjun9 kjun9 commented Apr 20, 2018

No description provided.

@youph
Contributor

youph commented Apr 23, 2018

Code Review

Comments

The code runs, and the loss decreases during training as expected. However, the F1 scores do not seem to change along with the loss. This is somewhat suspicious, as I'd expect the F1 scores to increase while the loss decreases. Might be worth checking how the F1 scores are evaluated.
The code is lean and reasonably easy to follow, even without comments.

Suggestions for @kjun9

  • Please add comments to help readers understand the code's purpose and workflow.
  • Perhaps add a sentence or two referencing the example use case from the paper that this code reproduces?
  • The final results on the test set are cryptic; could you please make them a bit more verbose, so that it's clear they are the final test scores?
  • Might be worth checking how the F1 scores are evaluated (see the Comments above, and the sketch below).
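As a starting point for that check, here's a minimal sketch of a sanity check against scikit-learn's reference F1 implementation. Note that model, test_features and test_labels are placeholders, and the one-hot/argmax assumptions are mine, not this PR's actual code:

```python
# Hypothetical F1 sanity check; names and label format are assumptions.
import numpy as np
from sklearn.metrics import f1_score

def check_f1(model, test_features, test_labels):
    # Predicted class = argmax over the model's output probabilities
    # (assumes single-label, multi-class classification, as in the
    # GraphSAGE paper's supervised benchmarks).
    probs = model.predict(test_features)
    y_pred = np.argmax(probs, axis=1)
    y_true = np.argmax(test_labels, axis=1)  # assumes one-hot labels

    # Report both micro- and macro-averaged F1, as in the paper.
    return (
        f1_score(y_true, y_pred, average="micro"),
        f1_score(y_true, y_pred, average="macro"),
    )
```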

@kjun9
Contributor Author

kjun9 commented Apr 23, 2018

  • python -m example.example_graphsage should already be what's in the README; could you double-check that's what you're seeing?
  • I'll make the other changes, and also double-check the F1 scores against the original implementation to see if I've missed something.

@youph
Contributor

youph commented Apr 23, 2018

Yes, the run command was correct; my bad.

@kjun9
Contributor Author

kjun9 commented Apr 30, 2018

@youph Thanks for the suggestions; please review the changes I've made accordingly. Most importantly, I found that I was missing ReLUs between each of the aggregation layers (despite having drawn boxes for them in the presentation...), and now the accuracy results look much more comparable to the original implementation. I've also added some more comments and optional parameters, plus a little bit of info about the example data from the original GraphSAGE codebase.
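For context, here's a minimal NumPy sketch of why the missing activations mattered: two stacked mean-aggregation steps with no nonlinearity in between collapse into a single linear map, as in the paper's Algorithm 1. All names, shapes, and weights below are illustrative, not the demo's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_aggregate(h, neighbours, W_self, W_neigh):
    """One GraphSAGE-style mean-aggregation step (no activation)."""
    neigh_mean = h[neighbours].mean(axis=1)   # average sampled neighbour features
    return h @ W_self + neigh_mean @ W_neigh  # combine self + neighbourhood

h = rng.standard_normal((5, 8))               # 5 nodes, 8 input features
neighbours = rng.integers(0, 5, size=(5, 3))  # 3 sampled neighbours per node

W1s, W1n = rng.standard_normal((2, 8, 16))    # layer-1 weights
W2s, W2n = rng.standard_normal((2, 16, 4))    # layer-2 weights

# Without a ReLU, layer2(layer1(h)) is still just a linear function of h.
h1 = mean_aggregate(h, neighbours, W1s, W1n)
out_linear = mean_aggregate(h1, neighbours, W2s, W2n)

# With a ReLU between the hops, the model gains its nonlinearity.
h1 = np.maximum(h1, 0.0)
out_nonlinear = mean_aggregate(h1, neighbours, W2s, W2n)
```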

@youph
Contributor

youph commented Apr 30, 2018

@kjun9 Thanks for fixing the code. Yes, the F1 scores look better now; in particular, they improve together with the loss.
May I ask you to also add a description (purpose, input args, output) to
def create_iterator(graph, nl, nf): (line 24 in example_graphsage.py)?
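For example, something along these lines would do. Note that the meanings of nl and nf below are my guesses, and would need correcting to match the actual code:

```python
def create_iterator(graph, nl, nf):
    """Build a minibatch iterator over the graph for training/evaluation.

    NOTE: illustrative sketch only; the meanings of nl and nf are
    assumptions and should be replaced with the actual ones.

    Args:
        graph: the input graph whose nodes are being classified.
        nl: (assumed) number of layers / neighbourhood hops to sample.
        nf: (assumed) number of input features per node.

    Returns:
        An iterator yielding (features, labels) batches for the model.
    """
```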

@kjun9
Contributor Author

kjun9 commented Apr 30, 2018

@youph Sure, you can check the descriptions I've added now.

@youph
Contributor

youph commented Apr 30, 2018

@kjun9 Thanks, all good now.

@kjun9 kjun9 merged commit 529592d into develop Apr 30, 2018
@kjun9 kjun9 deleted the graphsage branch April 30, 2018 04:20