
DEBUG logging for _api.py was accidentally enabled #104

Closed
codablock opened this issue Oct 8, 2021 · 9 comments · Fixed by #108
Comments

@codablock

I've just seen that filelock started to print DEBUG logs even though I'm using INFO level globally. I assume this got in accidentally?

@gaborbernat
Member

Not at all, see #97 (comment)

@codablock
Author

The _LOGGER.setLevel(logging.DEBUG) call causes all log messages with DEBUG severity to be emitted even if the root logger is set to level INFO or WARNING. This is clearly unexpected. It means that even though I've set the root logging level to WARNING, filelock logs with DEBUG severity appear all over the place unless I manually set the filelock logger's level to WARNING.
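A minimal sketch of the mechanism (assuming the library's logger is named "filelock", and simulating the setLevel call in question rather than importing the package):

```python
import io
import logging

buf = io.StringIO()
# Root logger at WARNING; note that basicConfig's handler itself has
# no level of its own (NOTSET).
logging.basicConfig(level=logging.WARNING, stream=buf, force=True)

# Simulate a library setting its own logger to DEBUG, as filelock did.
lib = logging.getLogger("filelock")
lib.setLevel(logging.DEBUG)

# The record passes the logger-level check (DEBUG >= DEBUG), propagates
# up to the root handler (level NOTSET), and is emitted despite the
# root logger's WARNING level.
lib.debug("emitted despite root WARNING level")
print(buf.getvalue())
# → DEBUG:filelock:emitted despite root WARNING level
```

Propagated records are checked only against handler levels, not against the root logger's level, which is why the root-level setting does not suppress them.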

@gaborbernat
Member

gaborbernat commented Oct 8, 2021

That's not true 😊 It depends on your log handler. Generally, when setting log levels you should fine-tune it at the log handler level, not the logger itself.

import logging
from logging import StreamHandler

handler = StreamHandler()
handler.setLevel(logging.WARNING)
logging.getLogger().addHandler(handler)

The level on the root logger is not meant to be the level of all child loggers; it is the level applied when users log without specifying a named logger.

This is clearly unexpected.

I think what's unexpected is expecting changing the root loggers level to have an impact on other loggers.

@codablock
Author

codablock commented Oct 8, 2021

The most basic configuration of Python logging does not involve manually adding/modifying handlers. For example, if you use logging.basicConfig(level=logging.INFO), one would expect that only messages at INFO level and above get logged. So far, every library that I encountered behaved that way, because they do not touch the logging level of their loggers.

In the filelock case however, I now have to introduce manually configured handlers, even though my application is only interested in basic logging to stdout/stderr.

I see that we have different understandings of the relations between loggers and handlers. I might be missing something and I'm happy to learn something new :)

EDIT: The described basicConfig() call sets the root logger level, not the handler level.

@gaborbernat gaborbernat reopened this Oct 8, 2021
@gaborbernat
Member

EDIT: The described basicConfig() call sets the root logger level, not the handler level.

That's unfortunate. 🤔

@caenrigen

Hi!

First of all, thank you very much for this package, it has been pretty useful in a project I am involved in.

I would like to know what is the status on this issue, and are there any changes planned?


I'll also leave some feedback that I hope helps with making a decision about it.

I am writing to you because our pipelines are now failing due to the introduction of logging with a default level of DEBUG. Everything still works, of course, but our pipelines now get filled with logging messages that were not there before, which is a bit of a breaking change, for us at least.

From consulting the python docs https://docs.python.org/3/howto/logging.html#when-to-use-logging:

DEBUG [When it’s used:] Detailed information, typically of interest only when diagnosing problems.
INFO [When it’s used:] Confirmation that things are working as expected.
WARNING [When it’s used:] An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected.

The default level is WARNING, which means that only events of this level and above will be tracked, unless the logging package is configured to do otherwise.

If the using application does not use logging, and library code makes logging calls, then (as described in the previous section) events of severity WARNING and greater will be printed to sys.stderr. This is regarded as the best default behaviour.

And perhaps also relevant:

Note It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements.

Considering the contents of the python documentation, would you consider leaving the logger level unspecified such that it defaults to WARNING?

That would solve the issue discussed above and the issue that I am facing myself. I have almost never had to add extra lines of code just to silence debug/info logs of specific packages, and it would not be a smooth developer experience if most packages required such extra lines for each package (and potentially for each module and submodule of a package).
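For reference, the application-side workaround in the meantime looks like this (a sketch, assuming the library's logger is named "filelock"):

```python
import logging

# Basic application setup: INFO-level logging to stderr.
logging.basicConfig(level=logging.INFO)

# Extra line needed per package to silence filelock's DEBUG output,
# because the library raises its own logger to DEBUG.
logging.getLogger("filelock").setLevel(logging.WARNING)
```

These are exactly the per-package lines the comment above argues should not be necessary.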

Hope this helps 😊 Looking forward to your thoughts!
Cheers

@gaborbernat
Member

Waiting for someone to put in a PR to fix it 😊

@quinncomendant

I came here to report this too, but I could never have explained the issue as eloquently as @caenrigen. Thanks!

@caenrigen

@gaborbernat Thank you very much for resolving this 😊
