Problems with own logging configuration #889
Hey @JustinDroege, if your logging config gets applied correctly, that would suggest an issue with the config itself. Having said that, it may be worth trying to run your model with parallel inference disabled; this can be configured by setting the … Just for context, when parallel inference is enabled, your model runs on a separate worker process. In the past, we've seen weird issues with logging on the worker pool. Most of those should be fixed now (in the …).
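For context, the rest of the thread shows that parallel inference can be switched off via `"parallel_workers": 0`. A minimal sketch of the relevant fragment of MLServer's `settings.json` (assuming no other settings need to change; with `0` the model runs in the main server process):

```json
{
  "parallel_workers": 0
}
```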
@adriangonz Thx for your answer! Here is an example (my configuration is not perfect yet; I still get some double logs even with propagation disabled):

```
2022-12-09 10:55:31,365 [mlserver] INFO - Using asyncio event-loop policy: uvloop
2022-12-09 10:55:31,372 [mlserver] WARNING - Model name 'logger-model' is different than model's folder name 'models'.
{"asctime": "2022-12-09 10:55:31,388", "name": "uvicorn.error", "levelname": "INFO", "message": "Started server process [29102]", "color_message": "Started server process [\u001b[36m%d\u001b[0m]"}
{"asctime": "2022-12-09 10:55:31,390", "name": "uvicorn.error", "levelname": "INFO", "message": "Waiting for application startup."}
{"asctime": "2022-12-09 10:55:31,390", "name": "grpc._cython.cygrpc", "levelname": "DEBUG", "message": "Using AsyncIOEngine.POLLER as I/O engine"}
2022-12-09 10:55:31,395 [mlserver.metrics] INFO - Metrics server running on http://0.0.0.0:8082
{"asctime": "2022-12-09 10:55:31,395", "name": "mlserver.metrics", "levelname": "INFO", "message": "Metrics server running on http://0.0.0.0:8082"}
2022-12-09 10:55:31,395 [mlserver.metrics] INFO - Prometheus scraping endpoint can be accessed on http://0.0.0.0:8082/metrics
{"asctime": "2022-12-09 10:55:31,395", "name": "mlserver.metrics", "levelname": "INFO", "message": "Prometheus scraping endpoint can be accessed on http://0.0.0.0:8082/metrics"}
INFO: Started server process [29102]
{"asctime": "2022-12-09 10:55:31,395", "name": "uvicorn.error", "levelname": "INFO", "message": "Started server process [29102]", "color_message": "Started server process [\u001b[36m%d\u001b[0m]"}
INFO: Waiting for application startup.
{"asctime": "2022-12-09 10:55:31,395", "name": "uvicorn.error", "levelname": "INFO", "message": "Waiting for application startup."}
2022-12-09 10:55:31,395 [mlserver] INFO - Loaded model 'logger-model' succesfully.
{"asctime": "2022-12-09 10:55:31,395", "name": "mlserver", "levelname": "INFO", "message": "Loaded model 'logger-model' succesfully."}
INFO: Application startup complete.
{"asctime": "2022-12-09 10:55:31,395", "name": "uvicorn.error", "levelname": "INFO", "message": "Application startup complete."}
2022-12-09 10:55:31,397 [mlserver.grpc] INFO - gRPC server running on http://0.0.0.0:8234
{"asctime": "2022-12-09 10:55:31,397", "name": "mlserver.grpc", "levelname": "INFO", "message": "gRPC server running on http://0.0.0.0:8234"}
INFO: Application startup complete.
{"asctime": "2022-12-09 10:55:31,397", "name": "uvicorn.error", "levelname": "INFO", "message": "Application startup complete."}
INFO: Uvicorn running on http://0.0.0.0:8123 (Press CTRL+C to quit)
{"asctime": "2022-12-09 10:55:31,398", "name": "uvicorn.error", "levelname": "INFO", "message": "Uvicorn running on http://0.0.0.0:8123 (Press CTRL+C to quit)", "color_message": "Uvicorn running on \u001b[1m%s://%s:%d\u001b[0m (Press CTRL+C to quit)"}
INFO: Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)
{"asctime": "2022-12-09 10:55:31,398", "name": "uvicorn.error", "levelname": "INFO", "message": "Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)", "color_message": "Uvicorn running on \u001b[1m%s://%s:%d\u001b[0m (Press CTRL+C to quit)"}
INFO: 127.0.0.1:57532 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK
INFO: 127.0.0.1:57532 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK
INFO: 127.0.0.1:57532 - "POST /v2/models/logger-model/infer HTTP/1.1" 200 OK
```

You can see that the last log lines are not formatted, even though the earlier uvicorn lines were. I also used `"parallel_workers": 0` here.
Hey @adriangonz, I'm pretty sure this is a bug. I have now tried different settings. It seems that when I specify the log configuration via the `logging_settings` parameter with 0 workers (`"parallel_workers": 0`), MLServer overwrites my own configuration. Here is my log configuration:

```json
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "json": {
            "class": "pythonjsonlogger.jsonlogger.JsonFormatter",
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "json",
            "stream": "ext://sys.stdout"
        },
        "error_console": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "json",
            "stream": "ext://sys.stdout"
        },
        "access": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "json",
            "stream": "ext://sys.stdout"
        },
        "default": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "json",
            "stream": "ext://sys.stdout"
        }
    },
    "loggers": {
        "mlserver": {
            "level": "DEBUG",
            "handlers": ["console"],
            "propagate": "no"
        },
        "uvicorn": {
            "level": "DEBUG",
            "handlers": ["console"],
            "propagate": "no"
        },
        "uvicorn.error": {
            "level": "DEBUG",
            "handlers": ["console"],
            "propagate": "no"
        },
        "uvicorn.access": {
            "level": "DEBUG",
            "handlers": ["console"],
            "propagate": "no"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": ["console"],
        "propagate": "no"
    }
}
```

You can see that I have configured each Uvicorn logger with its own handler.
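One thing worth flagging about the config above (a stdlib-only sketch, unrelated to MLServer itself): `logging.config.dictConfig` assigns the `propagate` value as-is, so the string `"no"` is a non-empty, truthy value and propagation effectively stays enabled. That alone can produce the kind of double logs mentioned above; a real JSON boolean `false` is what actually disables it:

```python
import logging
import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "loggers": {
        # YAML-style "no" is just a truthy string once parsed from JSON
        "demo.string": {"level": "DEBUG", "propagate": "no"},
        # A real boolean is what actually switches propagation off
        "demo.bool": {"level": "DEBUG", "propagate": False},
    },
})

# dictConfig copies the value straight onto the logger's `propagate` attribute
print(bool(logging.getLogger("demo.string").propagate))  # True: still propagates
print(bool(logging.getLogger("demo.bool").propagate))    # False
```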
Here you can clearly see that Uvicorn does not have my handlers. If I change the handlers in code, e.g. in the model's load hook, then logging also works for Uvicorn. However, as soon as the workers are configured to one or more (`"parallel_workers": >= 1`), my log configuration does seem to be loaded correctly:
But the worker that actually processes the requests does not seem to have this configuration. The usual Uvicorn format is logged again, and the same happens when I try to change the handlers via code: my log configuration does not take effect for that Uvicorn logger.
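The handler swap described above can be sketched with the stdlib alone. This is a hypothetical helper (`reattach_handlers` is not part of MLServer, and the code in the linked repo may differ), of the kind one might call from a model's load hook:

```python
import logging
import sys

def reattach_handlers(formatter: logging.Formatter) -> None:
    """Point MLServer's and uvicorn's loggers at a single stdout handler."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(formatter)
    for name in ("mlserver", "uvicorn", "uvicorn.error", "uvicorn.access"):
        logger = logging.getLogger(name)
        logger.handlers = [handler]   # drop whatever handlers were installed before
        logger.propagate = False      # a real boolean, not the string "no"

# e.g. called once during model load:
reattach_handlers(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
```

Note that this only affects loggers in the current process; with parallel workers enabled, each worker process would need to run the same reconfiguration.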
Hey @JustinDroege, there were a number of fixes around this area in MLServer recently. Which version of MLServer are you running?
Hey @adriangonz, I checked: my version is 1.2.0.
Hey @JustinDroege, after having a deeper look into this one, we've finally managed to find the root cause: the metrics server (which is spun up separately) was overriding the logging settings with uvicorn's defaults instead of respecting the custom config. This should now be fixed by #911. Once that PR gets merged, it would be great if you could give it a go, installing it directly from …
Thx, that sounds amazing! I will test it tomorrow!
Currently I have the problem that my logging configuration is not applied everywhere. As soon as the REST server (the Uvicorn worker) starts, my logging configuration is ignored. I have created a repo that reproduces my scenario and shows the configuration used. Maybe my configuration is just wrong. In the model itself, I print out all the loggers with their handlers and formatters, and from that output everything looks like it should fit. Do you have any ideas?
Here is my small example repo: https://github.com/JustinDroege/mlserver-logging
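Printing every logger together with its handlers and formatters, as described above, can be done with a small stdlib helper along these lines (`dump_logger_tree` is a hypothetical name; the repo's own diagnostic code may look different):

```python
import logging

def dump_logger_tree() -> None:
    """Print each known logger with its level, propagate flag and handlers."""
    for name in sorted(logging.root.manager.loggerDict):
        logger = logging.getLogger(name)
        handlers = [
            f"{type(h).__name__}<{type(h.formatter).__name__ if h.formatter else 'no formatter'}>"
            for h in logger.handlers
        ]
        print(f"{name}: level={logging.getLevelName(logger.level)} "
              f"propagate={logger.propagate} handlers={handlers}")

dump_logger_tree()
```

Comparing this output before and after server startup is a quick way to see whether a framework has replaced the configured handlers.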