
Problems with own logging configuration #889

Closed
JustinDroege opened this issue Dec 9, 2022 · 7 comments · Fixed by #911

Comments

@JustinDroege

Currently I have the problem that my logging configuration is not applied everywhere. As soon as the REST server starts (Uvicorn worker), my logging configuration is ignored. I have created a repo that reproduces my scenario and shows which configuration is used. Maybe my configuration is just wrong. In the model itself, I print out all the loggers with their associated handlers and formatters, and from that output everything looks correct. Do you have any ideas?

Here is my small example repo: https://github.com/JustinDroege/mlserver-logging

@adriangonz
Contributor

Hey @JustinDroege,

If your logging config isn't getting applied, that would suggest an issue with the config itself.

Having said that, it may be worth trying to run your model with parallel inference disabled. This can be configured by setting the parallel_workers: 0 flag in your model-settings.json file.

Just for context, when enabled, your model will run on a separate worker process. In the past, we've seen weird issues with logging on the worker pool. Most of those should be fixed now (in the 1.2.0 release), but it could be that there's something else lingering there.
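For reference, a minimal model-settings.json along those lines might look as follows — the name and implementation values here are placeholders, not taken from the repo above:

```json
{
  "name": "logger-model",
  "implementation": "model.LoggerModel",
  "parallel_workers": 0
}
```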

@JustinDroege
Author

JustinDroege commented Dec 9, 2022

@adriangonz Thanks for your answer!

Here is an example (my configuration isn't perfect yet; I still get some duplicate logs even with propagation disabled):

2022-12-09 10:55:31,365 [mlserver] INFO - Using asyncio event-loop policy: uvloop
2022-12-09 10:55:31,372 [mlserver] WARNING - Model name 'logger-model' is different than model's folder name 'models'.
{"asctime": "2022-12-09 10:55:31,388", "name": "uvicorn.error", "levelname": "INFO", "message": "Started server process [29102]", "color_message": "Started server process [\u001b[36m%d\u001b[0m]"}
{"asctime": "2022-12-09 10:55:31,390", "name": "uvicorn.error", "levelname": "INFO", "message": "Waiting for application startup."}
{"asctime": "2022-12-09 10:55:31,390", "name": "grpc._cython.cygrpc", "levelname": "DEBUG", "message": "Using AsyncIOEngine.POLLER as I/O engine"}
2022-12-09 10:55:31,395 [mlserver.metrics] INFO - Metrics server running on http://0.0.0.0:8082
{"asctime": "2022-12-09 10:55:31,395", "name": "mlserver.metrics", "levelname": "INFO", "message": "Metrics server running on http://0.0.0.0:8082"}
2022-12-09 10:55:31,395 [mlserver.metrics] INFO - Prometheus scraping endpoint can be accessed on http://0.0.0.0:8082/metrics
{"asctime": "2022-12-09 10:55:31,395", "name": "mlserver.metrics", "levelname": "INFO", "message": "Prometheus scraping endpoint can be accessed on http://0.0.0.0:8082/metrics"}
INFO:     Started server process [29102]
{"asctime": "2022-12-09 10:55:31,395", "name": "uvicorn.error", "levelname": "INFO", "message": "Started server process [29102]", "color_message": "Started server process [\u001b[36m%d\u001b[0m]"}
INFO:     Waiting for application startup.
{"asctime": "2022-12-09 10:55:31,395", "name": "uvicorn.error", "levelname": "INFO", "message": "Waiting for application startup."}
2022-12-09 10:55:31,395 [mlserver] INFO - Loaded model 'logger-model' succesfully.
{"asctime": "2022-12-09 10:55:31,395", "name": "mlserver", "levelname": "INFO", "message": "Loaded model 'logger-model' succesfully."}
INFO:     Application startup complete.
{"asctime": "2022-12-09 10:55:31,395", "name": "uvicorn.error", "levelname": "INFO", "message": "Application startup complete."}
2022-12-09 10:55:31,397 [mlserver.grpc] INFO - gRPC server running on http://0.0.0.0:8234
{"asctime": "2022-12-09 10:55:31,397", "name": "mlserver.grpc", "levelname": "INFO", "message": "gRPC server running on http://0.0.0.0:8234"}
INFO:     Application startup complete.
{"asctime": "2022-12-09 10:55:31,397", "name": "uvicorn.error", "levelname": "INFO", "message": "Application startup complete."}
INFO:     Uvicorn running on http://0.0.0.0:8123 (Press CTRL+C to quit)
{"asctime": "2022-12-09 10:55:31,398", "name": "uvicorn.error", "levelname": "INFO", "message": "Uvicorn running on http://0.0.0.0:8123 (Press CTRL+C to quit)", "color_message": "Uvicorn running on \u001b[1m%s://%s:%d\u001b[0m (Press CTRL+C to quit)"}
INFO:     Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)
{"asctime": "2022-12-09 10:55:31,398", "name": "uvicorn.error", "levelname": "INFO", "message": "Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)", "color_message": "Uvicorn running on \u001b[1m%s://%s:%d\u001b[0m (Press CTRL+C to quit)"}
INFO:     127.0.0.1:57532 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK
INFO:     127.0.0.1:57532 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK
INFO:     127.0.0.1:57532 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK

You can see that the last logs are not formatted, even though the uvicorn logs before them were formatted. I also used "parallel_workers": 0 here.

@JustinDroege
Author

Hey @adriangonz,

I'm pretty sure this is a bug. I have now tried different settings.

It seems that when the log configuration is specified via the "logging_settings" parameter with 0 workers ("parallel_workers": 0), my own configuration gets overwritten. Here is my log configuration:

{
  "version": 1,
  "disable_existing_loggers": false,
  "formatters": {
    "json": {
      "class": "pythonjsonlogger.jsonlogger.JsonFormatter",
      "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    }
  },
  "handlers": {
    "console": {
      "class": "logging.StreamHandler",
      "level": "DEBUG",
      "formatter": "json",
      "stream": "ext://sys.stdout"
    },
    "error_console": {
      "level": "DEBUG",
      "class": "logging.StreamHandler",
      "formatter": "json",
      "stream": "ext://sys.stdout"
    },
    "access": {
      "level": "DEBUG",
      "class": "logging.StreamHandler",
      "formatter": "json",
      "stream": "ext://sys.stdout"
    },
    "default": {
      "level": "DEBUG",
      "class": "logging.StreamHandler",
      "formatter": "json",
      "stream": "ext://sys.stdout"
    }
  },
  "loggers": {
    "mlserver": {
      "level": "DEBUG",
      "handlers": ["console"],
      "propagate": false
    },
    "uvicorn": {
      "level": "DEBUG",
      "handlers": ["console"],
      "propagate": false
    },
    "uvicorn.error": {
      "level": "DEBUG",
      "handlers": ["console"],
      "propagate": false
    },
    "uvicorn.access": {
      "level": "DEBUG",
      "handlers": ["console"],
      "propagate": false
    }
  },
  "root": {
    "level": "DEBUG",
    "handlers": ["console"]
  }
}

You can see that I have explicitly configured each Uvicorn logger with my own handler.
However, if I output the current logger configuration during the model load, I get the following result:

Logger: mlserver, Handler: console
Logger: mlserver, Formatter: <pythonjsonlogger.jsonlogger.JsonFormatter object at 0x7f9d0a2c73d0>
Logger: uvicorn, Handler: default
Logger: uvicorn, Formatter: <uvicorn.logging.DefaultFormatter object at 0x7f9d0a30efb0>
Logger: grpc, Handler: None
Logger: grpc, Formatter: None
Logger: kafka, Handler: None
Logger: kafka, Formatter: None
Logger: charset_normalizer, Handler: None
Logger: charset_normalizer, Formatter: None
Logger: uvicorn.access, Handler: access
Logger: uvicorn.access, Formatter: <uvicorn.logging.AccessFormatter object at 0x7f9d0a30eef0>

Here you can clearly see that Uvicorn does not have my handlers. If I then change the handler in code, e.g. in the model's load hook, logging also works for Uvicorn.
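For anyone who wants to reproduce this kind of inspection, here is a stdlib-only sketch of how one might dump the handlers and formatters per logger (the config and logger names are just illustrative, not the repo's actual config):

```python
import logging
import logging.config

# Apply a minimal dictConfig, then inspect which handlers and
# formatters each named logger actually ended up with.
CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {"plain": {"format": "%(name)s - %(message)s"}},
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "plain",
            "stream": "ext://sys.stdout",
        }
    },
    "loggers": {
        "mlserver": {"level": "DEBUG", "handlers": ["console"], "propagate": False},
    },
}
logging.config.dictConfig(CONFIG)


def dump_logger(name):
    """Return one line per handler, or a 'None' line if the logger has none."""
    logger = logging.getLogger(name)
    if not logger.handlers:
        return f"Logger: {name}, Handler: None"
    return "\n".join(
        f"Logger: {name}, Handler: {h.__class__.__name__}, "
        f"Formatter: {h.formatter.__class__.__name__}"
        for h in logger.handlers
    )


for name in ("mlserver", "uvicorn.access"):
    print(dump_logger(name))
```

A logger that was never given a handler (here "uvicorn.access") reports `Handler: None`, which matches the output shown above.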

However, as soon as the workers are set to 1 or more ("parallel_workers" >= 1), my log configuration appears to be loaded correctly:

Logger: mlserver, Handler: console
Logger: mlserver, Formatter: <pythonjsonlogger.jsonlogger.JsonFormatter object at 0x7fc5c2391e70>
Logger: uvicorn.error, Handler: console
Logger: uvicorn.error, Formatter: <pythonjsonlogger.jsonlogger.JsonFormatter object at 0x7fc5c2391e70>
Logger: uvicorn, Handler: console
Logger: uvicorn, Formatter: <pythonjsonlogger.jsonlogger.JsonFormatter object at 0x7fc5c2391e70>
Logger: grpc, Handler: None
Logger: grpc, Formatter: None
Logger: kafka, Handler: None
Logger: kafka, Formatter: None
Logger: charset_normalizer, Handler: None
Logger: charset_normalizer, Formatter: None
Logger: uvicorn.access, Handler: console
Logger: uvicorn.access, Formatter: <pythonjsonlogger.jsonlogger.JsonFormatter object at 0x7fc5c2391e70>

But the worker that actually processes the requests does not seem to have this configuration. The usual Uvicorn format is logged again, and the same applies when I try to change the handler via code. My log configuration does not take effect for this Uvicorn logger:

INFO:     127.0.0.1:51313 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK <-- not printed as JSON

@adriangonz
Contributor

Hey @JustinDroege ,

There were a number of fixes around this area in MLServer 1.2.0. Can you confirm that's the version you're running?

@JustinDroege
Author

Hey @adriangonz ,

I checked my version: it's 1.2.0.

@adriangonz
Contributor

Hey @JustinDroege ,

After having a deeper look into this one, we've finally managed to find the root cause: the metrics server (which is spun up separately) was "overriding" the logging settings with uvicorn's defaults, instead of respecting the custom config.

This should now be fixed in #911.

Once that PR gets merged, it would be great if you could give it a go by installing MLServer directly from master.
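For what it's worth, the "overriding" behaviour described above is easy to reproduce with the stdlib alone — a later dictConfig call for the same logger silently replaces its handlers. This is only an illustration of the mechanism, not MLServer's actual code:

```python
import logging
import logging.config

# The application's custom config attaches a handler to uvicorn.access.
CUSTOM = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "loggers": {"uvicorn.access": {"handlers": ["console"], "propagate": False}},
}

# Stand-in for a default config that a second component (e.g. a metrics
# server) might apply later for the same logger.
DEFAULT = {
    "version": 1,
    "disable_existing_loggers": False,
    "loggers": {"uvicorn.access": {"handlers": [], "propagate": True}},
}

logging.config.dictConfig(CUSTOM)
print(len(logging.getLogger("uvicorn.access").handlers))  # -> 1

# A second non-incremental dictConfig call for the same logger removes
# the existing handlers before attaching the (here: empty) new list.
logging.config.dictConfig(DEFAULT)
print(len(logging.getLogger("uvicorn.access").handlers))  # -> 0
```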

@JustinDroege
Author

Thanks, that sounds amazing! I will test it tomorrow!
