Hi,
I am trying to generate HD images, but the results are not good, and I suspect it is because I am training the models on lower-resolution images (256x256).
Unfortunately, if I try to train these networks on higher-resolution images I run out of memory. So I was wondering whether it is possible to split the networks across multiple GPUs and run the model on multiple devices.
Thanks
Multi-GPU support would be a great feature.
With the current implementation, you can also crop patches from high-res images (e.g. crop 256x256 patches from 512x512 images: set loadSize=512 and fineSize=256 for both training and test). This saves GPU memory during training while still allowing you to test your model on high-res images.
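The load-then-crop scheme suggested above can be sketched in plain NumPy (illustrative only; in the actual repo the loadSize/fineSize options handle this for you inside the data loader):

```python
import numpy as np

def random_crop(img, fine_size, rng=None):
    """Crop a random fine_size x fine_size patch from an H x W x C image array."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    # Pick a random top-left corner so the patch fits inside the image.
    top = rng.integers(0, h - fine_size + 1)
    left = rng.integers(0, w - fine_size + 1)
    return img[top:top + fine_size, left:left + fine_size]

# A 512x512 "image" as loaded with loadSize=512...
img = np.zeros((512, 512, 3), dtype=np.uint8)
# ...cropped to a 256x256 training patch, as with fineSize=256.
patch = random_crop(img, 256)
print(patch.shape)  # (256, 256, 3)
```

Training sees only 256x256 patches (keeping memory usage the same as before), but since the generator is fully convolutional it can still be run on full 512x512 images at test time.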