
move to torch.optim for optimization #17

Closed
Mayukhdeb opened this issue Jan 25, 2021 · 0 comments
Labels: top priority (most likely needed before the next release/new feature)

Comments

Mayukhdeb (Owner) commented Jan 25, 2021

Currently, the optimization is done with a hand-written update rule:

image_tensor.data = image_tensor.data + lr * (gradients_tensor.data / grad_norm)

This should be replaced with something like:

optimizer = torch.optim.SomeOptimizer([image_tensor], lr=config["learning_rate"], momentum=0.9)
loss.backward()
optimizer.step()

Have a look at the Shampoo optimizer; it can be used for "preconditioning".

Two advantages:

  • Cleaner code
  • (maybe) better performance
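For reference, a minimal runnable sketch of what the optimizer-based loop could look like. The tensor shape, learning rate, step count, optimizer choice, and loss below are placeholders, not the project's actual values; note that a bare tensor is passed to the optimizer inside a list, since it has no .parameters() method.

import torch

# Hypothetical example: optimize an image tensor directly.
image_tensor = torch.randn(1, 3, 224, 224, requires_grad=True)

# Any torch.optim optimizer works here; SGD with momentum is just one option.
optimizer = torch.optim.SGD([image_tensor], lr=0.05, momentum=0.9)

for _ in range(100):
    optimizer.zero_grad()
    loss = -image_tensor.mean()  # placeholder objective
    loss.backward()
    optimizer.step()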
Mayukhdeb changed the title from "move to torch.optim.Adam for optimization" to "move to torch.optim for optimization" on Mar 11, 2021
Mayukhdeb added the "top priority" label on Mar 11, 2021