The num_corrections default value is actually 100, not 0 #7
For completeness' sake... the fix would be to either set …
Does neural-style or neural-style-pt work with a history size value of 0?
From looking at the lbfgs source, I think it would error out. People who are unfamiliar with lbfgs (myself included) might try to "increase" the history size by setting it to 1 or 20 or 50, without realizing that they are actually decreasing it from the default of 100. I see it as a consistency thing, especially since setting other parameters like `tv_weight` to 0 really does disable them. It seems more natural to use a default of 100 for `lbfgs_num_corrections` if that's actually the underlying default.
Upon testing, it seems that a history size of zero does indeed result in an error.
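A minimal pure-Python sketch (not the actual torch code, just the same bounded-history pattern used by L-BFGS implementations) shows why a history size of 0 breaks: the "drop the oldest entry when full" step pops from an empty list.

```python
# Simplified sketch of the bounded-history update pattern in L-BFGS
# implementations (illustrative; not torch.optim.LBFGS's real code).
def update_history(old_dirs, old_stps, y, s, history_size):
    # When the buffer is full, drop the oldest pair before appending.
    if len(old_dirs) == history_size:
        old_dirs.pop(0)   # with history_size=0, this pops an empty list
        old_stps.pop(0)
    old_dirs.append(y)
    old_stps.append(s)

dirs, stps = [], []
update_history(dirs, stps, 1.0, 2.0, history_size=2)  # fine

try:
    update_history([], [], 1.0, 2.0, history_size=0)
except IndexError as e:
    print("history_size=0 fails:", e)
```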
This change has now been implemented in the multi-gpu branch (#20), and it has been implemented in the pip package as well.
- You can now use multiple GPUs in the same way that you could in the original neural-style with the `-multidevice_strategy` parameter. #2
- You can use any combination of GPUs and your CPU as devices with the `-multidevice_strategy` parameter.
- New `-disable_check` parameter for advanced users. #5
- AMD GPU support.
- Changed `-lbfgs_num_corrections` default. #7
In neural-style-pt (and the original neural-style.lua, actually), if `params.lbfgs_num_corrections` is not set, the default is 0 (line 29). Then, the `optim.history` object is only updated if `params.num_corrections > 0`.

However, the `torch.optim.lbfgs` class uses a default value of 100 if `history_size` is not set:

```
torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-05, tolerance_change=1e-09, history_size=100, line_search_fn=None)
```

See the lbfgs section: https://pytorch.org/docs/stable/optim.html

Thus, if somebody tries to set `lbfgs_num_corrections` to 0 (the default) to save memory, they will actually use size 100.
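The mismatch can be sketched in a few lines of plain Python (names here are illustrative, not the project's actual code): when `history_size` is only passed through for values greater than 0, a user-supplied 0 silently falls back to torch's default of 100.

```python
# Hypothetical sketch of the "only pass history_size when > 0" pattern
# described above; names are illustrative, not neural-style-pt's code.
TORCH_LBFGS_DEFAULT_HISTORY = 100  # torch.optim.LBFGS default

def effective_history_size(lbfgs_num_corrections):
    optim_state = {}
    # Old behavior: the setting is forwarded only when positive.
    if lbfgs_num_corrections > 0:
        optim_state['history_size'] = lbfgs_num_corrections
    # Anything not forwarded falls back to torch's own default.
    return optim_state.get('history_size', TORCH_LBFGS_DEFAULT_HISTORY)

print(effective_history_size(0))    # 100, not 0 — the surprise described above
print(effective_history_size(50))   # 50
```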