This repository has been archived by the owner on Jun 14, 2023. It is now read-only.

Multiple GPU training #19

Open
rahimentezari opened this issue Jun 8, 2022 · 1 comment
Comments

@rahimentezari

Thanks for sharing the code. I wanted to use it to train SimCLR on ImageNet-1K, but I could not use multiple GPUs on one machine. Can you please let me know how to use multiple GPUs, and how the hyperparameters should change with the number of GPUs?

Secondly, what do you recommend for ImageNet-1K hyperparameters, e.g. learning rate, batch size, etc.?

@mitchellnw

Not an official answer, but for the first question (how to train on multiple GPUs) you can do `torchrun --nproc_per_node=<how-many-gpus> main.py`.
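Not from the thread, but a minimal sketch of how a script launched with `torchrun` might pick up the GPU count and adjust the learning rate. `torchrun` sets `WORLD_SIZE` (and `LOCAL_RANK`) in each spawned process's environment; the linear scaling rule below (LR = base LR × batch size / 256) is the one used in the SimCLR paper, and the batch-size value here is purely illustrative.

```python
import os

def scaled_lr(base_lr: float, global_batch_size: int, base_batch: int = 256) -> float:
    """Linear LR scaling rule from the SimCLR paper: LR = base_lr * BatchSize / 256."""
    return base_lr * global_batch_size / base_batch

# torchrun sets WORLD_SIZE for every spawned process; default to 1 for single-GPU runs.
world_size = int(os.environ.get("WORLD_SIZE", 1))

per_gpu_batch = 128  # illustrative per-GPU batch size, not from the repo
global_batch = per_gpu_batch * world_size

# SimCLR's reported base LR of 0.3 (at batch 256), scaled to the global batch.
lr = scaled_lr(0.3, global_batch)
print(f"world_size={world_size} global_batch={global_batch} lr={lr}")
```

Keeping the per-GPU batch fixed and scaling the LR with the global batch is the common convention when adding GPUs under data parallelism, since each optimizer step then averages gradients over a larger effective batch.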
