
What kind of gpu environment did you use to train the model? #7

Closed
kambehmw opened this issue Aug 7, 2020 · 4 comments

Comments

@kambehmw

kambehmw commented Aug 7, 2020

I tried to run both pretraining and fine-tuning on a single RTX 2080 Ti, but it takes a lot of time. What kind of training environment did you use?
I would appreciate it if you could tell me the specs and number of GPUs you used for model pretraining and fine-tuning.

@parasj
Owner

parasj commented Aug 9, 2020

Hi @kambehmw,

Thank you for trying our repository! Are you trying to extend ContraCode to a new programming language?

Pretraining is memory hungry as contrastive learning benefits from large batch sizes (see https://arxiv.org/abs/2002.05709). Moreover, the transformer backbone we leverage uses significantly more memory than typical image classification architectures.

We generally performed pretraining on four 16 GB V100 GPUs, two 48 GB RTX 8000 GPUs, or four 24 GB RTX 6000 GPUs. We provide pretrained checkpoints due to the large cost of pretraining.

Given the lower memory capacity of the RTX 2080 Ti, I would recommend (1) reducing the sequence length for the Transformer encoder, (2) decreasing the hidden dimension size of our model, and (3) adding checkpoint annotations for gradient checkpointing (e.g. PyTorch gradient checkpointing).
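Point (3) could look like the sketch below. This is a minimal illustration of `torch.utils.checkpoint`, not code from this repository; the encoder class, layer sizes, and number of layers are placeholders chosen for the example:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedEncoder(nn.Module):
    """Transformer encoder that recomputes each layer's activations
    during the backward pass instead of storing them, trading compute
    for a lower peak memory footprint."""

    def __init__(self, d_model=512, nhead=8, n_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead) for _ in range(n_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            # Activations inside `layer` are discarded after the forward
            # pass and recomputed when gradients are needed.
            x = checkpoint(layer, x)
        return x


# Usage: shape (seq_len, batch, d_model); backward works as usual.
model = CheckpointedEncoder(d_model=64, nhead=8, n_layers=2)
inp = torch.randn(16, 4, 64, requires_grad=True)
out = model(inp)
out.sum().backward()
```

Memory savings grow with the number of checkpointed layers, at the cost of roughly one extra forward pass during backward.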

Thanks,
Paras

@parasj parasj closed this as completed Aug 9, 2020
@kambehmw
Author

@parasj

Thanks for your reply.

I am tinkering with your source code to try to understand the methodology of your paper.
If I can come up with an idea, I will try to extend ContraCode to a new programming language.

Thanks for the memory-saving advice as well. I'll also consider using higher-spec GPUs in the cloud as an option.

Thanks again.

@parasj
Owner

parasj commented Aug 13, 2020

@kambehmw Thanks! Happy to discuss further as well. My email is in my profile.

@kambehmw
Author

kambehmw commented Aug 21, 2020

@parasj
Let me ask one more question.
We ran pretraining on the following GCP environment:

GCP:

  • n1-standard-16 (vCPU x 16, 60 GB memory)
  • V100 x 4
  • HDD 512GB

As in the README, we executed the following command:

python representjs/pretrain_distributed.py pretrain_lstm2l_hidden \
  --num_epochs=200 --batch_size=512 --lr=1e-4 --num_workers=4 \
  --subword_regularization_alpha 0.1 --program_mode contrastive --label_mode contrastive --save_every 5000 \
  --train_filepath=data/codesearchnet_javascript/javascript_augmented.pickle.gz \
  --spm_filepath=data/codesearchnet_javascript/csnjs_8k_9995p_unigram_url.model \
  --min_alternatives 2 --dist_url tcp://localhost:10001 --rank 0 \
  --encoder_type lstm --lstm_project_mode hidden --n_encoder_layers 2

The data/codesearchnet_javascript/javascript_augmented.pickle.gz file takes quite a long time to load.
It has been loading for about 1-2 hours and still has not finished. How long did it take to load when you ran it? (Is it because of the GCP environment?)
Also, do you have any ideas for reducing the load time?
