What kind of GPU environment did you use to train the model? #7
I tried to run both pretraining and fine-tuning on a single RTX 2080 Ti, but it takes a lot of time. What kind of training environment did you use? I would appreciate it if you could tell me the specs and the number of GPUs you used for model pretraining and fine-tuning.

Comments
Hi @kambehmw, thank you for trying our repository! Are you trying to extend ContraCode to a new programming language?

Pretraining is memory hungry because contrastive learning benefits from large batch sizes (see https://arxiv.org/abs/2002.05709). Moreover, the Transformer backbone we use consumes significantly more memory than typical image classification architectures. We generally performed pretraining on 4 16GB V100 GPUs, 2 48GB RTX 8000 GPUs, or 4 24GB RTX 6000 GPUs. Because pretraining is so costly, we provide pretrained checkpoints.

Given the lower memory capacity of the RTX 2080 Ti, I would recommend (1) reducing the sequence length for the Transformer encoder, (2) decreasing the hidden dimension size of our model, and (3) adding checkpoint annotations for gradient checkpointing (e.g. PyTorch gradient checkpointing).

Thanks!
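To make the batch-size point concrete: in a SimCLR-style objective, every other example in the batch acts as a negative, so shrinking the batch also shrinks the number of negatives per positive pair. Here is a minimal NT-Xent sketch; the function name and temperature are illustrative, not ContraCode's actual code:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.07):
    """NT-Xent / InfoNCE loss over two augmented views.

    z1, z2: (batch, dim) embeddings. Every other row in the combined
    batch serves as a negative, which is why larger batches help.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit rows
    sim = z @ z.t() / temperature                       # (2B, 2B) cosine logits
    sim.fill_diagonal_(float("-inf"))                   # mask self-similarity
    B = z1.size(0)
    # Row i's positive is its other view: i+B for the first half, i-B after.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```

And a rough sketch of recommendation (3), wrapping each Transformer encoder layer with PyTorch's `torch.utils.checkpoint` so activations are recomputed during the backward pass instead of stored; the class name and dimensions below are hypothetical, not ContraCode's configuration:

```python
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead)
             for _ in range(num_layers)]
        )

    def forward(self, x):  # x: (seq_len, batch, d_model)
        for layer in self.layers:
            if self.training:
                # Recompute this layer's activations on the backward pass,
                # trading extra compute for lower peak memory.
                x = checkpoint(layer, x, use_reentrant=False)
            else:
                x = layer(x)
        return x
```

Checkpointing combines with shorter sequences (1) and a smaller hidden dimension (2); together they may bring peak memory within an 11GB 2080 Ti's budget, at the cost of slower steps and possibly weaker representations from the smaller batch.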
Thanks for your reply. I am tinkering with your source code to better understand the methodology of your paper. Thanks for the memory-saving advice as well; I'll also consider renting higher-spec GPUs in the cloud. Thanks again.
@kambehmw Thanks! Happy to discuss further as well. My email is in my profile.
@parasj I'm running on GCP. I executed the command as given in the README.