
PyTorch Lightning arguments marked as Optional but required #304

Closed
TommasoBendinelli opened this issue Jan 13, 2021 · 6 comments
Assignees: gorarakelyan
Labels: area / integrations (integrations with other tools and libs), type / bug (something isn't working)

Comments


TommasoBendinelli commented Jan 13, 2021

Hello!
I was looking at your logger internals, and here I see that the arguments are marked as Optional:

class _PytorchLightningLogger(LightningLoggerBase):
    def __init__(
        self,
        repo: Optional[str] = None,
        experiment: Optional[str] = None,
        train_metric_prefix: Optional[str] = "train_",
        val_metric_prefix: Optional[str] = "val_",
        test_metric_prefix: Optional[str] = "test_",
        flush_frequency: int = DEFAULT_FLUSH_FREQUENCY,
    ):

Nevertheless, if I do not pass any of them and try to train a model, I get the following error when running model.fit:

  File "/home/gem/.pyenv/versions/3.8.6/lib/python3.8/posixpath.py", line 90, in join
    genericpath._check_arg_types('join', a, *p)
  File "/home/gem/.pyenv/versions/3.8.6/lib/python3.8/genericpath.py", line 152, in _check_arg_types
    raise TypeError(f'{funcname}() argument must be str, bytes, or '
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'NoneType'
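The traceback suggests that the `None` default of one of these Optional arguments (most likely `repo`) is being forwarded unguarded into `os.path.join`. A minimal sketch of that failure mode, using a hypothetical helper name rather than Aim's actual internals:

```python
import os

# Hypothetical helper illustrating the failure mode: a parameter typed
# Optional[str] with default None is passed straight into os.path.join.
def join_repo_path(repo=None):
    # With repo=None, os.path.join raises the TypeError from the traceback:
    # "join() argument must be str, bytes, or os.PathLike object, not 'NoneType'"
    return os.path.join(repo, ".aim")
```

So the type hints say the argument is optional, but the code path only works when a string is actually supplied.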

@gorarakelyan gorarakelyan self-assigned this Jan 13, 2021

gorarakelyan commented Jan 13, 2021

Hi @TommasoBendinelli ,

Can you please share what arguments you passed to the pytorch lightning logger constructor?

TommasoBendinelli (Author) replied:

Sure,

from aim.pytorch_lightning import AimLogger
trainer = pl.Trainer(gpus=-1, max_epochs=1000, logger=AimLogger())

gorarakelyan (Contributor) replied:

Hm, that's weird. Are you running it from inside virtualenv/conda?

TommasoBendinelli (Author) replied:

I am running the code within a virtualenv and using Hydra.

gorarakelyan (Contributor) replied:

Thanks for the details. We are looking into it.

@gorarakelyan gorarakelyan added the type / bug Issue type: something isn't working label Jan 14, 2021
@gorarakelyan gorarakelyan added the area / integrations Issue area: integrations with other tools and libs label Oct 29, 2021
gorarakelyan (Contributor) commented:

@TommasoBendinelli just noticed this issue remained unanswered, sorry for the really late reply.
All the loggers have been updated recently, so the issue should be fixed now. If it is not, please reopen this issue and we will investigate. 🙏
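For readers hitting the same error on older versions: the general shape of the fix is to guard the Optional argument before it reaches any path operation. A sketch of that pattern, with a hypothetical helper name (not Aim's actual implementation):

```python
import os

# Hypothetical None-guard pattern: fall back to the current working
# directory when no repo is given, so None never reaches os.path.join.
def resolve_repo_path(repo=None):
    base = repo if repo is not None else os.getcwd()
    return os.path.join(base, ".aim")
```

With this guard, calling the logger without arguments behaves like "use the current directory as the repo" instead of crashing.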

Projects
Status: Done
Development

No branches or pull requests

2 participants