
The num_corrections default value is actually 100, not 0 #7

Closed
ajhool opened this issue Jan 16, 2019 · 5 comments

ajhool commented Jan 16, 2019

In neural-style-pt (and the original neural-style.lua, actually), if params.lbfgs_num_correction is not set, the default is 0 (line 29). Then, optim_state['history_size'] is only set if params.lbfgs_num_correction > 0:

    # line 56
    if params.lbfgs_num_correction > 0:
        optim_state['history_size'] = params.lbfgs_num_correction

However, the torch.optim.LBFGS class uses a default value of 100 if history_size is not set:

torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-05, tolerance_change=1e-09, history_size=100, line_search_fn=None)

See the LBFGS section:
https://pytorch.org/docs/stable/optim.html

Thus, if somebody sets lbfgs_num_correction to 0 (the default) to save memory, they will actually get a history size of 100.
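
The fallback is easy to verify directly. Here is a minimal sketch (not the repository's exact code, and the optim_state contents are assumed) showing that leaving history_size out of the options dict silently yields PyTorch's default of 100:

    import torch

    x = torch.randn(10, requires_grad=True)
    # As when lbfgs_num_correction is 0: 'history_size' is never added.
    optim_state = {'max_iter': 100}
    optimizer = torch.optim.LBFGS([x], **optim_state)
    print(optimizer.param_groups[0]['history_size'])  # prints 100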

ajhool commented Jan 17, 2019

For completeness' sake... the fix would be to either set optim_state['history_size'] = 0 explicitly, or to change the lbfgs_num_correction default value to 100 to reflect the actual default used by torch.optim.LBFGS.
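
A rough sketch of the second option (the argparse-style flag definition is assumed from the script's other parameters, not quoted from the repository):

    import argparse
    import torch

    parser = argparse.ArgumentParser()
    # Default to 100 so the flag reflects what torch.optim.LBFGS actually
    # uses when history_size is left unset.
    parser.add_argument('-lbfgs_num_correction', type=int, default=100)
    params = parser.parse_args([])

    # With a truthful default, history_size can always be set explicitly.
    optim_state = {'history_size': params.lbfgs_num_correction}
    optimizer = torch.optim.LBFGS([torch.randn(3, requires_grad=True)], **optim_state)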

ProGamerGov (Owner) commented

Does neural-style or neural-style-pt work with a history size value of 0? My understanding was that zero was used because it wasn't a valid option in this case.

ajhool commented Jan 18, 2019

From looking at the LBFGS source, I think it would error out. People who are unfamiliar with L-BFGS (myself included) might try to "increase" the history size by setting it to 1, 20, or 50, without realizing that they are actually decreasing it from the default of 100. I see it as a consistency thing, especially since setting other parameters like tv_weight to 0 really does disable them. It seems more natural for lbfgs_num_correction to default to 100 if that's what the underlying default actually is.

ProGamerGov (Owner) commented

Upon testing, it seems that a history size of zero does indeed result in an error:

Running optimization with L-BFGS
Traceback (most recent call last):
  File "neural_style.py", line 410, in <module>
    main()
  File "neural_style.py", line 243, in main
    optimizer.step(feval)
  File "/usr/local/lib/python2.7/dist-packages/torch/optim/lbfgs.py", line 147, in step
    old_dirs.pop(0)
IndexError: pop from empty list
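
For reference, the error can be reproduced outside of neural_style.py with a few lines (a hedged sketch; the exact failure line depends on the PyTorch version):

    import torch

    x = torch.randn(5, requires_grad=True)
    optimizer = torch.optim.LBFGS([x], history_size=0, max_iter=20)

    def closure():
        # Standard LBFGS closure: re-evaluate the loss and its gradient.
        optimizer.zero_grad()
        loss = (x ** 2).sum()
        loss.backward()
        return loss

    # On the second inner iteration LBFGS tries to drop the oldest stored
    # curvature pair from an empty history list, raising the IndexError above.
    optimizer.step(closure)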

ProGamerGov added a commit that referenced this issue Sep 21, 2019
ProGamerGov added a commit that referenced this issue Sep 29, 2019

ProGamerGov commented Sep 29, 2019

This change has now been implemented in the multi-gpu branch (#20), and in the pip package as well.

ProGamerGov added a commit that referenced this issue Sep 29, 2019:
- You can now use multiple GPUs in the same way that you could in the original neural-style with the `-multidevice_strategy` parameter. #2

- You can use any combination of GPUs and your CPU as devices with the `-multidevice_strategy` parameter.

- New `-disable_check` parameter for advanced users. #5

- AMD GPU support.

- Changed the `-lbfgs_num_correction` default. #7