training stuck at scale 9:[1999/2000] #19
Comments
Having the same issue running on Google Colab: it seems to stall out at the same [1999/2000] point.
This seems to be a memory problem. When the number of scales is large, there are more model parameters to store.
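A quick way to check whether memory is actually the culprit (these are generic PyTorch calls, not something SinGAN itself provides; `scale_num` below is just an illustrative name):

```python
# Generic PyTorch check (not part of SinGAN): print GPU memory in use so you
# can see whether it keeps growing as finer scales are added to the pyramid.
import torch

def report_gpu_memory(tag=""):
    if torch.cuda.is_available():
        alloc = torch.cuda.memory_allocated() / 1024 ** 2     # MB held by live tensors
        peak = torch.cuda.max_memory_allocated() / 1024 ** 2  # peak MB since process start
        print(f"[{tag}] allocated: {alloc:.0f} MB, peak: {peak:.0f} MB")

# e.g. call report_gpu_memory(f"scale {scale_num}") at the start of each scale's training.
```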
Sorry, I fat-fingered that (clicked the Close button accidentally). I've checked GPU memory usage while training. @tamarott I read your paper; there's an example with The Starry Night.
With 16 GB of GPU memory, the highest-resolution output I have achieved from the main training script is 667 × 413. Does that seem right? Would changing the aspect ratio let me squeeze more pixels into the model, so I could also get more in the final random samples?
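A back-of-the-envelope sketch (under the assumption, not stated in the paper, that GPU memory use scales roughly with the pixel count of the finest scale): at a fixed pixel budget, changing the aspect ratio only trades width for height, so it should not buy extra pixels.

```python
# Assumption: GPU memory scales roughly with the pixel count of the finest scale.
budget = 667 * 413  # ~275k pixels, the largest output that fit in 16 GB here

def dims_for_aspect(aspect, budget=budget):
    """Width/height with the given width:height ratio and the same pixel count."""
    height = (budget / aspect) ** 0.5
    return round(aspect * height), round(height)

print(dims_for_aspect(667 / 413))  # ~ (667, 413), the original shape
print(dims_for_aspect(1.0))        # ~ (525, 525), square image, same budget
print(dims_for_aspect(16 / 9))     # ~ (700, 394), wider, but no more pixels
```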
Oh, it turned out that scale 0 just worked fine. This is pretty amazing.
@phonygene what do you mean by not needing to train beyond scale 3? Is it possible to generate arbitrarily sized images using just scale 3? How? Thank you!
@rickdotta As I said: in my case, training at scales larger than scale 3 only generated identical images, so I tried the scale 0 model and found that it worked fine. I don't understand why it behaves so differently from the paper, but at least it saves me a lot of time (troll face).
@phonygene How do you stop training at a smaller scale?
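One possible approach, sketched with illustrative names only (not the repo's actual code): if the training script loops over scales, you can cap that loop so only the coarser scales are trained. Since each scale's models are typically saved as they finish, interrupting training after the scale you want should also leave usable checkpoints.

```python
# Hypothetical sketch: TOTAL_SCALES and train_single_scale are placeholders,
# not the repo's actual identifiers. The point is simply to cap the per-scale loop.
TOTAL_SCALES = 10  # whatever pyramid depth your image works out to
MAX_SCALE = 3      # train only scales 0..3 instead of the full pyramid

def train_single_scale(scale_num):
    # stand-in for the real per-scale training step
    print(f"training scale {scale_num}")

for scale_num in range(TOTAL_SCALES):
    if scale_num > MAX_SCALE:
        break  # skip the finer, more memory-hungry scales
    train_single_scale(scale_num)
```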
Training constantly gets stuck at [1999/2000],
such as "scale 7:[1999/2000]" or "scale 9:[1999/2000]".
I can't interrupt it even with Ctrl+C; it's totally dead (a generic way to see where it hangs is sketched after the environment list below).
I used a mountain picture, resized to the same size as one of your sample images.
I'm using:
python 3.6.8
torch 1.3.0
GPU rtx2080ti
NVIDIA Driver 419.35
CUDA 10.1
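If it helps anyone hitting the same hang: a generic Python aid (standard library, not specific to this repo) is to have the interpreter dump every thread's traceback periodically, so a stuck run shows which call is blocking even when Ctrl+C no longer works. Something like this near the top of the main training script:

```python
# Generic hang-diagnosis aid (standard library, not specific to SinGAN):
# periodically dump all thread tracebacks to stderr.
import faulthandler

faulthandler.enable()                                # also dump on hard crashes
faulthandler.dump_traceback_later(600, repeat=True)  # dump every 10 minutes
```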