Reproducing numbers from the paper on java-small dataset #6

Closed
bzz opened this issue Mar 21, 2019 · 8 comments


bzz commented Mar 21, 2019

First of all, thank you for sharing the model code and the detailed reproduction instructions!

I tried to reproduce the results from the paper on the java-small dataset using the default hyper-parameters from config.py, changing only the batch size to 256 to fit the model into GPU memory, and was able to fetch and preprocess the data and train the model.

Using the best model on the validation set, I got: Precision: 36.24, Recall: 26.89, F1: 30.88
In the paper's Table 1, the results on java-small are: Precision: 50.64, Recall: 37.40, F1: 43.02
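
(For reference, F1 is the harmonic mean of precision and recall; a quick Python check of the numbers above, which agree up to rounding of the reported precision and recall:)

    # F1 as the harmonic mean of precision and recall
    def f1(precision, recall):
        return 2 * precision * recall / (precision + recall)

    print(round(f1(36.24, 26.89), 2))  # 30.87, matches the validation F1 above up to rounding
    print(round(f1(50.64, 37.40), 2))  # 43.02, matches the java-small F1 reported in the paper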

Here is a notebook with all the steps and the output.

Most probably I have just missed something obvious here, and I would be very grateful if you could point me in the right direction so I can reproduce the paper's results.

Thanks in advance!


urialon commented Mar 21, 2019

Hi Alexander,
First, this table includes the final results, so they are reported on the test set.
Second, this number is still a little low. I don't remember such a difference between the test and validation sets. I suspect that something went wrong with the preprocessing step (probably a timeout), such that you got fewer examples to train on.
To investigate this direction, can you please count the number of lines in each of your java-small training, validation and test sets?

Just cd to the dataset dir and run wc -l *.


bzz commented Mar 21, 2019

Thank you for the prompt reply!

Indeed, on the Colab instance some of the preprocessing failed, but since state is not persistent there I did not keep those stats (it will take some hours to re-run; I will post back).

And from the same notebook (where some data failed to be preprocessed), the results of running evaluation on the test set with the best model are:

Evaluation time: 0h2m39s
Accuracy: 0.01070277466367713
Precision: 0.2878609709141995, recall: 0.17890890275846458, F1: 0.22066929919029954

But I ran preprocessing twice and training several times on a local machine, just in case (keeping some intermediate .csv data), and in both cases the results were the same.

wc -l data/java-small/*
     1727 data/java-small/java-small.dict.c2s
      291 data/java-small/java-small.histo.node.c2s
     9361 data/java-small/java-small.histo.ori.c2s
     3160 data/java-small/java-small.histo.tgt.c2s
    57019 data/java-small/java-small.test.c2s
    33771 data/java-small/java-small.train.c2s
    23844 data/java-small/java-small.val.c2s

Numbers on the test set from the training logs on the local machine, with all the data preprocessed and with patience increased to 20:

Accuracy after 24 epochs: 0.04240
After 24 epochs: Precision: 0.22624, recall: 0.11896, F1: 0.15593
Not improved for 20 epochs, stopping training
Best scores - epoch 4:
Precision: 0.29567, recall: 0.13068, F1: 0.18125


urialon commented Mar 21, 2019

I see, you preprocessed far fewer examples than there are in the dataset. I designed the scripts to work on a 64-core machine, not on Colab, so they timed out and less than 5% of the examples were extracted.
Instead of preprocessing on Colab, take the following preprocessed dataset:

https://s3.amazonaws.com/code2seq/datasets/java-small-preprocessed.tar.gz
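
(If it helps, a minimal Python sketch of fetching and unpacking it; the target directory below is only an assumption, adjust it to wherever your training scripts expect the data:)

    import tarfile
    import urllib.request

    URL = "https://s3.amazonaws.com/code2seq/datasets/java-small-preprocessed.tar.gz"
    ARCHIVE = "java-small-preprocessed.tar.gz"

    # Download the preprocessed archive (it is large, so this may take a while),
    # then extract it under data/ (an assumed location, not prescribed here).
    urllib.request.urlretrieve(URL, ARCHIVE)
    with tarfile.open(ARCHIVE, "r:gz") as tar:
        tar.extractall("data/")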

Regarding training - the default hyperparameters should be OK. In the paper I used (for Java-small specifically):

config.SUBTOKENS_VOCAB_MAX_SIZE = 7300

and:

config.TARGET_VOCAB_MAX_SIZE = 8700

But I think the default vocab sizes will work very similarly.
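
(A minimal runnable sketch of those two overrides, using a stand-in for the Config object from config.py; only the two values quoted above are changed, everything else stays at its default:)

    from types import SimpleNamespace

    # Stand-in for the Config object defined in config.py, for illustration only.
    config = SimpleNamespace()

    # Vocabulary caps used for Java-small in the paper (values quoted above);
    # the default sizes are expected to behave very similarly.
    config.SUBTOKENS_VOCAB_MAX_SIZE = 7300  # max sub-token vocabulary size
    config.TARGET_VOCAB_MAX_SIZE = 8700     # max target vocabulary size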


urialon commented Mar 22, 2019

I added a link to Java-med-preprocessed as well, in the README:
https://github.com/tech-srl/code2seq/blob/master/README.md#datasets


bzz commented Mar 22, 2019

Thank you! I'll try these out over the weekend and report back.

From a quick glance, the numbers I posted are from training on ~1/10th of the data.

urialon closed this as completed Apr 13, 2019
@claudiosv

Regarding training - the default hyperparameters should be OK. In the paper I used (for Java-small specifically):

Hi Uri,

May I ask what parameters you used for the java-med set? Are they also 190000 and 27000, respectively?

Thank you!


urialon commented Aug 23, 2019

Hi @claudiosv,
Sure. Can you please create a new issue and I'll answer there?
I need to check, but I'm on vacation and I'm afraid that I'll lose your question if it stays in this closed thread.

@claudiosv

@urialon
Thanks Uri, I made a new issue. Enjoy your vacation 😄
