
Model parallelism #13

Closed
pedropgusmao opened this issue Apr 23, 2017 · 1 comment
Comments

@pedropgusmao

Hi,
I am trying to generate HD images, but the results are not very good, and I suspect it is because I am training the models on lower-resolution images (256x256).
Unfortunately, if I try to train these networks on higher-resolution images, I run out of memory. So I was wondering whether it is possible to split the networks over multiple GPUs and run the model across multiple devices.
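As background on what such a split would look like: in a framework like PyTorch (an assumption for illustration; the implementation discussed in this repo may use a different framework), model parallelism amounts to placing different halves of the network on different devices and moving activations between them in `forward`. A minimal sketch, with a hypothetical two-stage generator and a CPU fallback when two GPUs are not available:

```python
import torch
import torch.nn as nn

# Hypothetical split: encoder on one device, decoder on another.
# Falls back to CPU when fewer than two GPUs are available.
dev0 = torch.device("cuda:0" if torch.cuda.device_count() >= 2 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() >= 2 else "cpu")

class SplitGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # First half lives on dev0 ...
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        ).to(dev0)
        # ... second half lives on dev1.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        ).to(dev1)

    def forward(self, x):
        h = self.encoder(x.to(dev0))
        return self.decoder(h.to(dev1))  # activations cross devices here

net = SplitGenerator()
out = net(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Each device then only holds its half of the parameters and activations, which is what frees memory for larger inputs; the cost is the device-to-device copy on every forward and backward pass.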

Thanks

@junyanz
Owner

junyanz commented Apr 26, 2017

Multi-GPU support would be a great feature.
With the current implementation, you can also crop patches from high-resolution images (e.g. crop 256x256 patches from 512x512 images by setting loadSize=512 and fineSize=256 for both training and testing). This saves GPU memory during training while still allowing you to test your model on high-resolution images.
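The loadSize/fineSize scheme described above boils down to loading the image at loadSize and taking a random fineSize patch from it at training time. A minimal numpy sketch of that cropping step (the function and variable names here are illustrative, not the repo's actual options):

```python
import numpy as np

def random_crop(img, fine_size, rng=None):
    """Take a random fine_size x fine_size patch from an H x W x C image."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    assert h >= fine_size and w >= fine_size, "image smaller than crop size"
    top = rng.integers(0, h - fine_size + 1)
    left = rng.integers(0, w - fine_size + 1)
    return img[top:top + fine_size, left:left + fine_size]

# e.g. an image loaded at loadSize=512, cropped to a fineSize=256 patch
img = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a loaded image
patch = random_crop(img, 256)
print(patch.shape)  # (256, 256, 3)
```

Because the crop position is random, each epoch sees different 256x256 regions of the 512x512 training images, while a fully convolutional network trained this way can still be run on the full-resolution images at test time.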
