Add adahessian #169
Comments
Thanks! I will take a look.
Since the authors licensed their code under the unfriendly GPL v3, I cannot reuse that code here and unfortunately have to produce a new implementation from the paper. Commercial usage of the authors' implementation is limited, since it forces users to license potentially commercial code under the GPL as well.
@jettify this is a re-implementation of the paper from scratch, with an MIT license.
Hi @jettify,

First, thanks so much for creating this repository; it is a great resource. I am one of the Adahessian authors, and we have just changed the license to the more friendly MIT License. I also just opened a new pull request that adds the Adahessian optimizer to your repository. I got the following results when I ran the optimizer on the Rosenbrock and Rastrigin tests:

I have also attached the visualization for each test.

P.S.: Based on the results I got, it seems Adahessian gets the best loss value among the optimizers.

P.S.2: @bratao Thanks so much for adding the Adahessian optimizer and for promoting it.
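For reference, here is a minimal sketch of how such a Rosenbrock run could look with the optimizer from the pull request. The starting point, learning rate, and step count below are illustrative assumptions, not the settings behind the results above:

```python
import torch
import torch_optimizer as optim

# Rosenbrock function, global minimum at (1, 1).
def rosenbrock(p):
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

p = torch.tensor([-2.0, 2.0], requires_grad=True)  # starting point is an arbitrary choice
optimizer = optim.Adahessian([p], lr=0.15)  # lr is a guess, not a tuned value

for _ in range(500):
    optimizer.zero_grad()
    loss = rosenbrock(p)
    # Adahessian estimates the Hessian diagonal with Hutchinson's method,
    # so the backward pass has to keep the autograd graph alive.
    loss.backward(create_graph=True)
    optimizer.step()

print(p)  # inspect how close the iterate gets to (1.0, 1.0)
```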
Just merged the PR with the optimizer, all thanks to @amirgholami!
PyPI is also updated: https://pypi.org/project/torch-optimizer/0.1.0/
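Usage follows the usual torch-optimizer pattern. A minimal sketch, assuming the 0.1.0 API, with a placeholder model and batch; the hyperparameter values shown are my understanding of the package defaults, not tuned settings:

```python
# pip install torch-optimizer
import torch
import torch_optimizer as optim

model = torch.nn.Linear(10, 1)  # placeholder model
optimizer = optim.Adahessian(
    model.parameters(),
    lr=1.0,
    betas=(0.9, 0.999),
    eps=1e-4,
    weight_decay=0.0,
    hessian_power=1.0,
)

x = torch.randn(32, 10)   # placeholder batch
target = torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward(create_graph=True)  # create_graph=True is required for the Hessian estimate
optimizer.step()
```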
Hello,
I'm a big fan of this project. Recently a new optimizer has been proposed that promises SOTA results for many tasks:
https://github.com/amirgholami/adahessian
It would be great to have it available here!