lbfgs failing with "function value changing less than tolX" when in GPU mode #52
This code is marvelous work! I'm stunned by the amazing results it can give.
I happened to encounter a few small issues that maybe someone would be able to help me with.
I noticed some strange interruptions when rendering an image with the lbfgs optimizer. It shows up like this:
The moment it happens is always completely random. I played with weights and other parameters, but the funny thing is that it doesn't matter: I can render the same style and source with the same settings several times in a row and it will eventually fail like this, or, if it keeps failing, it will eventually go through. When I render a JPG sequence using a simple bash loop (with the same style and settings), it usually fails once or twice every ten frames and moves on. I can then render the failed frames again with the same settings and they finish fine.
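Since a failed frame usually succeeds when simply run again, one workaround is to retry failed frames automatically instead of re-rendering them by hand. A minimal sketch (the `render_once` callable and the `th neural_style.lua` invocation it might wrap are illustrative, not from this thread):

```python
def render_with_retries(render_once, max_retries=3):
    """Call a flaky render step until it succeeds or retries run out.

    render_once is any zero-argument callable returning True on success,
    e.g. a wrapper around a subprocess call such as
    subprocess.run(["th", "neural_style.lua", ...]).returncode == 0.
    Returns the attempt number that succeeded, or None if all failed.
    """
    for attempt in range(1, max_retries + 1):
        if render_once():
            return attempt
    return None

# Example: a render that fails once, then succeeds on the second try.
attempts = iter([False, True])
print(render_with_retries(lambda: next(attempts)))  # → 2
```

A wrapper like this inside the bash loop would paper over the random failures without changing any optimizer settings.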
Exploring this issue a bit, I tried to modify
Worth mentioning: this happens only in GPU mode, for both the nn and cudnn backends. If I could understand why it is happening, I would love to investigate it further.
I'm also curious about the memory limitations. I use, for example,
I'm rendering on a GTX 770 (2 GB of GPU RAM) in GPU mode and an i7-4790K (16 GB of RAM) in CPU mode, using Ubuntu 14.04.2, Nvidia driver 352.39, CUDA 7.5.18-19867135, and cuDNN 7.0.
Again, the results of this code are just mind-blowing. Thank you for sharing this, jcjohnson!
So... despite the issues mentioned above, I managed to push out some visuals for my music.
I would like to share some of my experiences doing those tests:
Some processing info:
Processing and post production thoughts:
Thanks guys, all the best, have fun!
It would be interesting to know whether it's also present with clnn. If it is, that points to something in the code base; if not, it could be something in the driver. There's nothing in particular about GPUs that should make the numbers radically different from the CPU, other than that GPUs use 32-bit floats.
Hmmm... did you try CPU with 32-bit floats? i.e., cast everything to a