
Implementation of VarNaming task from ICLR '18 #2

@haighal

Hi,

My name is Alex Haigh, and I'm a master's student at Stanford. For a project, I'm working to reproduce (and hopefully extend) some of your results on VarNaming from your '18 ICLR paper. My understanding of your model for that task is that you:

  1. Replace each instance of your target variable with a <SLOT> token
  2. Represent every other variable as the concatenation of (a) the average of the (learnable) embeddings of the subtokens in its name and (b) a representation of its type (as described at the top of p. 5)
  3. Run GGNN message passing for 8 timesteps over the program graph
  4. Average the final representations of all <SLOT> tokens
  5. Use that average as the input to a GRU decoder that emits the variable name as a sequence of subtokens (rough sketch below).

I found the dataset here, and it looks like it's in the format that utils/tensorise.py can digest. Likewise, the model you use for VarNaming appears to be the Graph2SeqModel.

So, is this all I need to do to reproduce the results?

  • run `utils/tensorise.py --model graph2seq` on the dataset published at ICLR '18
  • train a graph2seq model on the tensorised dataset with `utils/train.py PATH_TO_TENSORISED_GRAPHS --model graph2seq`

Just wanted to make sure I'm looking in the right place, and I'd also appreciate any other tips you have. Also, what modifications did you make to the model based on Cvitkovic et al., and is there a way to compare results with and without those modifications?

Thanks!
