How to test on only one GPU? #21

Closed · VoiceBeer opened this issue May 11, 2019 · 7 comments

@VoiceBeer

Hi, we are a group of students reproducing this BigGAN model for our coursework. One question: we only have one GPU on Colab, and we are wondering how to modify the model accordingly. BTW, we are also trying to use another dataset, and there are some problems there too. Hope to get your reply, really appreciate it :).

@VoiceBeer (Author)

Sorry, I missed that there is a "Using your own dataset" section in the README file, but I'm still wondering whether it is possible to use only one GPU.

@christegho commented May 11, 2019

You should be able to find example commands for training your own model in the scripts folder. On Colab, with one GPU, you should be able to train a model with a reduced batch size; I think a batch size of 24 for 128x128 image outputs should work. You might also need to reduce num_workers. You can increase num_G_accumulations and num_D_accumulations to compensate for the reduced batch size.
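
As a minimal sketch (not an official recipe), a single-GPU Colab invocation might look like the following, loosely adapted from the launch scripts in scripts/. The flag names follow the repo's argument parser, but these exact values are untested guesses:

```bash
# Hypothetical single-GPU run at 128x128, cut down from scripts/launch_BigGAN_bs256x8.sh.
# batch_size is reduced to fit one GPU; the accumulation flags simulate a larger batch.
python train.py \
  --dataset I128_hdf5 --shuffle \
  --batch_size 24 \
  --num_G_accumulations 8 --num_D_accumulations 8 \
  --num_workers 2 \
  --G_lr 1e-4 --D_lr 4e-4
```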

@VoiceBeer (Author)

Thank you very much @christegho! We are now working on using our own dataset, and will try modifying the batch size and the number of iterations later on. Thanks also for your advice on the accumulations.

@danielhuoo

> You should be able to find example commands on how to train your own model in the folder scripts. […]

Thanks for your reply. If we use our own dataset (we have a large number of images), what structure should the folders have?

@christegho

In the directory with all the training scripts, create a folder data. In data, create another folder with the name of your dataset, for example imagenet.

The folder imagenet should have one subfolder for every class, containing all the images for that class.
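
For example, with a hypothetical two-class dataset (the class and file names are just placeholders):

```
data/
└── imagenet/
    ├── dogs/
    │   ├── img_0001.jpg
    │   └── ...
    └── cats/
        ├── img_0001.jpg
        └── ...
```

This is the standard torchvision-style ImageFolder layout, where each subfolder name is treated as a class label.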

@ajbrock (Owner) commented May 11, 2019

As mentioned above, the amount of compute/VRAM you have determines the max batch size you can use, with bigger being better (up to a proportion of the size/complexity of your dataset). You can always spoof bigger batches with gradient accumulation, assuming you can fit at least a small batch into your setup.
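
To make the accumulation arithmetic concrete: with a batch size of 24 and num_G_accumulations set to 8, gradients are accumulated over 8 forward/backward passes before each optimizer step, for an effective generator batch of 24 × 8 = 192 (and likewise for the discriminator via num_D_accumulations).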

ajbrock closed this as completed May 11, 2019
@VoiceBeer (Author)

Thank you guys, very helpful 😆 @christegho @ajbrock
