
'Yogi' object has no attribute 'Yogi' #170

Closed
sailfish009 opened this issue Aug 10, 2020 · 5 comments

Comments

@sailfish009

sailfish009 commented Aug 10, 2020

Hi, calling Yogi from pytorch-optimizer hits a bug (runtime error: 'Yogi' object has no attribute 'Yogi'),
so at the moment I am calling yogi.py directly.

# import torch_optimizer as optim    # fails on the second iteration of the for loop
from yogi import Yogi                 # import directly from the yogi.py file (includes the types.py definitions)

for fold, (train_idx, val_idx) in enumerate(...):
    model = Net(...)
    # optim = optim.Yogi(model.parameters(), lr=1e-2, betas=(0.9, 0.999), eps=1e-3, initial_accumulator=1e-6, weight_decay=0)
    optim = Yogi(model.parameters(), lr=1e-2, betas=(0.9, 0.999), eps=1e-3, initial_accumulator=1e-6, weight_decay=0)
    ...
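
For what it's worth, here is a minimal sketch of the same loop that binds the optimizer instance to a separate name, so that `optim` keeps referring to the torch_optimizer module on later iterations. This is only a guess at the cause; `Net(...)`, the `enumerate(...)` split, and the hyperparameters are placeholders carried over from the snippet above, not verified here:

import torch_optimizer as optim      # `optim` stays bound to the module

for fold, (train_idx, val_idx) in enumerate(...):
    model = Net(...)
    # Use a name other than `optim` for the instance, so `optim.Yogi`
    # still resolves on the next iteration of the loop.
    optimizer = optim.Yogi(
        model.parameters(),
        lr=1e-2,
        betas=(0.9, 0.999),
        eps=1e-3,
        initial_accumulator=1e-6,
        weight_decay=0,
    )
    ...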
@jettify
Owner

jettify commented Aug 10, 2020

I just tried locally:

In [1]: import torch_optimizer as optim

In [2]: optim.Yogi.__doc__
Out[2]: 'Implements Yogi Optimizer Algorithm.\n    It has been proposed in `Adaptive methods for Nonconvex Optimization`__.\n\n    Arguments:\n        params: iterable of parameters to optimize or dicts defining\n            parameter groups\n        lr: learning rate (default: 1e-2)\n        betas: coefficients used for computing\n            running averages of gradient and its square (default: (0.9, 0.999))\n        eps: term added to the denominator to improve\n            numerical stability (default: 1e-8)\n        initial_accumulator: initial values for first and\n            second moments (default: 1e-6)\n        weight_decay: weight decay (L2 penalty) (default: 0)\n\n    Example:\n        >>> import torch_optimizer as optim\n        >>> optimizer = optim.Yogi(model.parameters(), lr=0.01)\n        >>> optimizer.zero_grad()\n        >>> loss_fn(model(input), target).backward()\n        >>> optimizer.step()\n\n    __ https://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization  # noqa\n\n    Note:\n        Reference code: https://github.com/4rtemi5/Yogi-Optimizer_Keras\n    '

Working as expected. The modified MNIST example https://github.com/jettify/pytorch-optimizer/blob/master/examples/mnist.py also works as expected (not counting tests).

If you think the problem is on the library side, could you please create a small repro script so I can debug it locally?
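
For reference, one possible small repro, under the assumption (not confirmed in this thread) that the loop rebinds the name `optim` to the optimizer instance; `torch.nn.Linear` is only a stand-in model:

import torch
import torch_optimizer as optim

for fold in range(2):
    model = torch.nn.Linear(2, 1)                    # stand-in model
    # First iteration: works, but rebinds `optim` to the Yogi instance.
    # Second iteration: `optim` is now a Yogi object, so `optim.Yogi` raises
    # AttributeError: 'Yogi' object has no attribute 'Yogi'.
    optim = optim.Yogi(model.parameters(), lr=1e-2)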

@sailfish009
Author

Here is the kernel I was using the Yogi optimizer with. It is very time consuming, so I haven't checked it again:
https://www.kaggle.com/nroman/melanoma-pytorch-starter-efficientnet

@jettify
Owner

jettify commented Aug 11, 2020

Unfortunately I cannot run it myself:
[Screenshot: 2020-08-11 at 10:15:00]

@sailfish009
Author

I had tested that Python script on my local PC, but it required downloading the dataset (~100 GB).

@jettify
Owner

jettify commented Jan 9, 2021

Closing for now; feel free to reopen if you have additional information on how to reproduce this issue.

@jettify jettify closed this as completed Jan 9, 2021
2 participants