When use batch_size>2 #23
Comments
@zhongtao93 it is an out-of-memory error
But I have four 32 GB GPUs, and it still doesn't work when I use --gpus 4
@zhongtao93 try decreasing the batch size
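A side note on why --gpus 4 may not have helped: in DDP-style multi-GPU training the configured batch size is usually *per process*, so adding GPUs does not shrink each GPU's memory footprint. The "decrease the batch size" remedy can be sketched as a minimal, framework-free loop (the per-sample memory numbers below are made up purely for illustration):

```python
def largest_fitting_batch(batch_size, mem_per_sample_gb, gpu_mem_gb):
    """Halve batch_size until the (estimated) activation memory fits the GPU.

    Illustrative only: real activation memory depends on the model,
    resolution, and optimizer state, not a flat per-sample constant.
    """
    while batch_size > 1 and batch_size * mem_per_sample_gb > gpu_mem_gb:
        batch_size //= 2
    return batch_size

# Hypothetical ~9 GB of activations per sample on a 32 GB card:
print(largest_fitting_batch(8, 9.0, 32.0))  # -> 2
```

If the effective batch size matters for training quality, gradient accumulation (several forward/backward passes per optimizer step) is the usual way to keep it large while the per-step batch stays small.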
Thank you, decreasing it to 3 works. Another question: the prediction head of MobileStyleGAN is only useful for training, so during the forward pass at inference time those parts can be skipped. Is that correct?
@zhongtao93 the auxiliary heads are skipped when MobileStyleGAN is converted to ONNX or CoreML
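To illustrate the training-only heads, here is a toy, framework-free sketch of a module whose forward pass returns auxiliary outputs only in training mode. All class, method, and attribute names here are invented for the example; MobileStyleGAN's actual implementation differs:

```python
class TinyStudent:
    """Toy stand-in for a student network with an auxiliary prediction head.

    The `training` flag mimics PyTorch's `nn.Module.training`: exporters and
    inference code typically run with it set to False, so the aux branch is
    never traced into the deployed graph.
    """

    def __init__(self):
        self.training = True

    def backbone(self, x):
        return [v * 2 for v in x]  # placeholder for the synthesis network

    def aux_head(self, feats):
        return sum(feats)  # placeholder for a training-only prediction head

    def forward(self, x):
        feats = self.backbone(x)
        out = {"img": feats}
        if self.training:  # aux output only feeds the training loss
            out["aux"] = self.aux_head(feats)
        return out


m = TinyStudent()
print(sorted(m.forward([1, 2]).keys()))  # -> ['aux', 'img']
m.training = False                       # export/inference mode
print(sorted(m.forward([1, 2]).keys()))  # -> ['img']
```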
Thank you. Do you think this model could run in real time at an output size of 256x256 on a mobile phone, after reducing the number of channels or applying other pruning methods?
@zhongtao93 it runs on an iPhone 12 at 4 fps with an output size of 1024x1024; we didn't test lower resolutions.
Thank you! I will continue studying how to improve the speed.
Hi, I have the same question. Did you get any results?
Hi, thank you for sharing the code! With the default batch_size=2, training is fine, but when I use batch_size = 8 I get the error below:
Hoping for your reply!