What had previously worked for us suddenly stopped working. Running Flask 0.10.1 (with Flask-RESTful) and Gunicorn 19.3.0, we set up a WARNING-level stream handler for errors. These errors used to be captured by Gunicorn, but they now go to the Gunicorn process's stdout/stderr instead of being written to the access and error logs (which were, and always have been, configured).
To resolve this, we had to add the gunicorn.error logging handlers to the Flask app's handlers. Here's a snippet where we set up and then extend Flask's handlers to restore "normal" logging by Gunicorn:
import logging
handler = logging.StreamHandler()
app.logger.addHandler(handler)
# fix: gives access to the gunicorn error log facility
app.logger.handlers.extend(logging.getLogger('gunicorn.error').handlers)
I am sorry that I do not fully understand the first part where you describe the problem and the change that happened.
Before, errors used to go to the Gunicorn logs: in Flask, we created a WARNING-level StreamHandler and added it to Flask's logger, and our errors would end up in the Gunicorn errorlog.
Then we found the error stream going to Gunicorn's stdout instead of the error log.
To route the errors to Gunicorn's error log, we now must use getLogger('gunicorn.error') and add its handlers to Flask.
Did you change anything else? Using different options (like --preload) or anything like this? Do you know what Gunicorn version you were using before?
Oh, this was not a Gunicorn version update?
Same version of Gunicorn, same Gunicorn settings. Sorry, I must not be explaining this clearly. :)
We run many Flask apps via Gunicorn. These all log normally. We always create a StreamHandler in flask, add it to the Flask app's handlers; Gunicorn catches the errors logged to the StreamHandler and logs to the errorlog as appropriate.
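That historical pattern looks roughly like this (a minimal sketch using a plain logger in place of Flask's `app.logger`; `'myapp'` is just a stand-in name):

```python
import logging
import sys

# The setup used across the flask apps: a WARNING-level StreamHandler
# on the application logger. Under the old behavior, gunicorn captured
# this stderr output into its errorlog.
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(logging.WARNING)

app_logger = logging.getLogger('myapp')  # stand-in for flask's app.logger
app_logger.addHandler(handler)
```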
The only thing that makes this app different from the others is that it is built primarily atop Flask-RESTful. No other logging paradigms have changed, except that now we must extend Flask's handlers by capturing Gunicorn's handlers and explicitly passing them to Flask. Otherwise, Gunicorn will not capture errors from the app into its error log; instead, they go to stdout/stderr.
Here's the config:
import multiprocessing
import os

if not hasattr(os, "sysconf"):
    raise RuntimeError("No sysconf detected.")

app_name = 'nexpress'

host = '0.0.0.0'
port = 5035
bind = '%s:%s' % (host, port)

raw_env = ['GEVENT_RESOLVER=ares']
forwarded_allow_ips = '*'

workers = multiprocessing.cpu_count() + 1
worker_class = 'gevent'
worker_connections = 1000
timeout = 30
keepalive = 2

debug = False
spew = False
daemon = False
preload_app = False

if os.path.exists('/tmp/%s.pid' % app_name):
    pidfile = '/tmp/%s.pid' % app_name
else:
    pidfile = '/var/run/%s.pid' % app_name

umask = 0
user = 'webapp'
group = 'webapp'
tmp_upload_dir = None

errorlog = 'error.log'
loglevel = 'info'
accesslog = 'access.log'
proc_name = app_name

def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

def pre_exec(server):
    server.log.info("Forked child, re-executing.")

def when_ready(server):
    server.log.info("Server is ready. Spawning workers")
Not sure I understand the issue there. If you want to log the error using the Gunicorn error handler, you could simply use the Python logging module and write to the gunicorn.error logger. The other way would be to use the WSGI environ's wsgi.errors stream and print to it. Am I missing anything?
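The two routes described above can be sketched roughly like this (a minimal sketch; `app` here is a bare WSGI callable for illustration, not the real application):

```python
import logging

# Route 1: log through the 'gunicorn.error' logger. When running under
# gunicorn, that logger carries the handlers pointed at the errorlog.
def log_via_gunicorn_logger(message):
    logging.getLogger('gunicorn.error').error(message)

# Route 2: write to the per-request WSGI error stream. Gunicorn sets
# environ['wsgi.errors'] to a wrapper around its error log.
def app(environ, start_response):
    print("something went wrong", file=environ['wsgi.errors'])
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']
```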
I'll try to help explain. Historically our flask apps have been configured to log following the comment that can be found at #379 (comment)
This has always worked well for us. When logging using app.logger.error or app.logger.exception the output going to stderr via that configuration always ended up in the gunicorn error.log file. This is what we have come to expect.
Recently, in a new application configured the same way (the only difference being that we are using flask-restful), these logged messages no longer showed up in the gunicorn error.log file. These logs are now being caught by upstart which handles running gunicorn for us.
I think the short of it is:
In order to achieve similar results, where we could use the logger in the flask app object, we added the handlers from gunicorn.error to app.logger.
Nowhere has there ever been a reference to gunicorn.error in our code before, but this somehow used to work for us.
In all of our applications, we are either using gunicorn 19.2.x or 19.3.x.
It looks like the issue/comment that I reference above may actually give us the answer, but the dates are all pretty far in the past, so I am unsure why we only recently started seeing this issue.
Also, to ask explicitly: when using Flask and wanting to use app.logger, what is your recommended way of ensuring that errors logged via app.logger make it into the Gunicorn error log?
1. why did this work previously?
No idea! That's why I was double-checking version numbers and things :). Thanks for your patience.
2. what would cause this to no longer work?
Again... no idea :(. But it sounds like it might be the expected behavior. :/
Here is where flask configures the application logger: https://github.com/mitsuhiko/flask/blob/6e77cd709e16f7d1d654f67a5fc82e54bceae432/flask/logging.py#L83
From that, it looks like if there is no LOGGER_NAME set, this selects the root logger and removes the console handler Gunicorn has added to it by default. It does, however, add its own handlers that log to stderr in production if logging is enabled for the app.
I also add a StreamHandler to my Flask apps and I don't have this problem. I'm using
stream_handler = logging.StreamHandler()
and the logs always go to Gunicorn's stderr log.
@sivel were you using the errorlog setting in both apps?
Actually, forget that question. I misread.
At least I know how this is supposed to work now. I wonder if maybe you're running this app in debug?
Flask logs to the default StreamHandler in debug, and that goes to stderr: https://github.com/mitsuhiko/flask/blob/6e77cd709e16f7d1d654f67a5fc82e54bceae432/flask/logging.py#L65
In production, Flask logs to the default StreamHandler unless there's an active request with a 'wsgi.errors' key in the environment, in which case it goes there.
Gunicorn sets wsgi.errors to be a wrapper that looks for a StreamHandler on the 'gunicorn.error' logger, which will be the file handler for the errorlog setting.
So, with app.debug, flask logs should go to stderr. Without it, they should go to 'gunicorn.error' logger.
Is this what you see?
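The dispatch described above can be sketched as follows (a paraphrase for illustration, not Flask's actual code; `is_debug` and `environ_getter` are stand-in callables):

```python
import logging
import sys

def make_dispatching_handler(is_debug, environ_getter):
    """Roughly how flask 0.10-era logging routes records: stderr in
    debug, otherwise the active request's wsgi.errors stream (which
    gunicorn points at its errorlog)."""
    class DispatchingHandler(logging.StreamHandler):
        def emit(self, record):
            environ = environ_getter()
            if is_debug() or not environ:
                self.stream = sys.stderr
            else:
                # production: send the record to the request's error stream
                self.stream = environ.get('wsgi.errors', sys.stderr)
            logging.StreamHandler.emit(self, record)
    return DispatchingHandler()
```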
I created this as a demo: https://gist.github.com/angstwad/527783c47a108c645cdd.
I provided the sample file, the output from my terminal window when running, tailing the error and access logs, and a pip freeze of my virtualenv.
Setting an error-logfile and access-logfile when running this logs to stderr, not to error.log.
I described and linked to the various code paths that are involved in the expected behavior. The next step would be for someone to step through their own code and see whether Flask logs to wsgi.errors, whether that is going to Gunicorn as intended, and whether Gunicorn is properly finding the stream to write to.
@tilgovi the part of Flask you are referring to is the dev (0.11) version but @angstwad is using 0.10.1 (same as me) where the 'LOGGING_HANDLER_POLICY' does not exist.
I did another test and I actually do have the same problem (I just never noticed because I don't use --error-log): the Flask stderr log always goes to Gunicorn's stderr, regardless of the --error-log setting.
I am not sure I understand this issue.
The Gunicorn error log is there to log errors from Gunicorn, not from another application. We stopped rerouting logs from stdout a while ago for that reason.
@romabysen "the flask stderr log always goes to gunicorn stderr, regardless of the --error-log setting." What do you mean precisely? Do you have any example code that would help to figure it out?
bump. anything to add to this issue?
It's not clear, actually, whether Gunicorn is working as expected or the user's expectation is just not met. By default, Gunicorn should not reroute logs to stdout. If something is not working, let me know.
This issue is getting in my way so much, too! I never seem to manage to get Gunicorn to log stuff properly. I had problems with both Django and Flask, where no matter what I tried I only got logs like:
2015-12-05 01:40:34  [DEBUG] POST /next
"184.108.40.206 - - [05/Dec/2015:01:40:34] "POST /next HTTP/1.0" 500 291 "-" "python-requests/2.7.0 CPython/3.5.0 Darwin/14.1.0"
No stack trace or anything that lets me figure out what the problem is. I tried setting Flask to log stuff to stdout, changing log levels to DEBUG, and a bunch of settings with the arcane Python log-config format.
This is such basic functionality that I would really expect it to work more smoothly. In all other environments I worked in, e.g. Rails, it was just a no-brainer. Some things that I would expect to come out of the box:
If someone has working recipes for how a Django & Flask app should be "properly" configured to work this way, including that in the official Gunicorn docs would be so much help to me – and I assume other newcomers. I would also be very grateful for any tips on where to look for good working examples.
@metakermit actually, did you configure it to print to wsgi.errors or to the gunicorn.error logger? That is the only way to have an application error displayed in Gunicorn's logs.
This has been changed 2 years ago: 4152318
And for the why: #309 (comment)
and some other comments at that time. Capturing stdout from the worker may introduce some issues with the current design. This can change when the new IPC architecture is in place.
I don't know. I tried quite a few combinations and have just given up for now; I start the app using python manage.py runserver when I need to find errors. I find it a bit frustrating that there is no good canonical example of how to configure the two most popular Python web frameworks to show error logs to the developer. Having a collection of 10 commit/issue-comment links to patch together into a configuration that might still not work does not really help much, unfortunately. I appreciate the intention, though! If I catch some free time one day, I'll try to figure this out and share it with others.
I am also having trouble with logging. I have these flags set:
--access-logfile /var/log/gunicorn/access.log --error-logfile /var/log/gunicorn/error.log
I am getting stuff in error.log, but it is not very helpful. When unexpected errors occur in production, error.log shows that 'something' happened and that the Gunicorn server has restarted itself, but there is currently no way to tell what the exception was. Adding my own application-level logging is not really a solution, because I can only catch and log expected errors. I keep adding more logging hoping to catch the issue, but what I really want is for Gunicorn to catch unhandled errors that crash my app and log them to error.log, not just tell me something bad occurred. Right now it seems impossible to get a stack trace from an unexpected error. I have some mysterious problems occurring in my production environment and no idea what is going on, only that sometimes my process crashes and restarts. I am new to Gunicorn, so apologies if I am missing something obvious, but right now I am super stuck.
@danielrmeyer You can increase the log level using the --log-level option.
@benoitc Thanks, I will mess with that and post here if I fix my problem. I did see that option though (I'm running 19.4):
--log-level LEVEL The granularity of Error log outputs. [info]
I did not try changing it because it looks like it defaults to [info], which in my understanding would capture errors at info level as well as more severe levels like [error]. Is my thinking on this wrong? In any case, I'll explicitly set it and see what happens.
I did some experiments raising exceptions at various places in my code and was surprised to see the correct thing occur with the exceptions showing up in the log--so I think the gunicorn logging is working how I expect. When my mystery problem occurs (which I now am pretty sure is temporary loss of connectivity to the DB) this is what I see happen in the error log:
[2016-02-03 21:19:31 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6488)
[2016-02-03 21:19:31 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6489)
[2016-02-03 21:19:31 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6490)
[2016-02-03 21:19:31 +0000]  [INFO] Worker exiting (pid: 6489)
[2016-02-03 21:19:31 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6485)
[2016-02-03 21:19:31 +0000]  [INFO] Worker exiting (pid: 6490)
[2016-02-03 21:19:31 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6487)
[2016-02-03 21:19:31 +0000]  [INFO] Worker exiting (pid: 6485)
[2016-02-03 21:19:31 +0000]  [INFO] Worker exiting (pid: 6488)
[2016-02-03 21:19:31 +0000]  [INFO] Worker exiting (pid: 6487)
[2016-02-03 21:19:32 +0000]  [INFO] Booting worker with pid: 6522
[2016-02-03 21:19:32 +0000]  [INFO] Booting worker with pid: 6523
[2016-02-03 21:19:32 +0000]  [INFO] Booting worker with pid: 6524
[2016-02-03 21:19:32 +0000]  [INFO] Booting worker with pid: 6526
[2016-02-03 21:19:32 +0000]  [INFO] Booting worker with pid: 6527
[2016-02-03 21:20:02 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6523)
[2016-02-03 21:20:02 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6524)
[2016-02-03 21:20:02 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6522)
[2016-02-03 21:20:02 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6526)
[2016-02-03 21:20:02 +0000]  [INFO] Worker exiting (pid: 6522)
[2016-02-03 21:20:02 +0000]  [CRITICAL] WORKER TIMEOUT (pid:6527)
[2016-02-03 21:20:02 +0000]  [INFO] Worker exiting (pid: 6527)
[2016-02-03 21:20:02 +0000]  [INFO] Worker exiting (pid: 6526)
[2016-02-03 21:20:02 +0000]  [INFO] Worker exiting (pid: 6524)
[2016-02-03 21:20:02 +0000]  [INFO] Worker exiting (pid: 6523)
[2016-02-03 21:20:02 +0000]  [INFO] Booting worker with pid: 6543
[2016-02-03 21:20:02 +0000]  [INFO] Booting worker with pid: 6545
[2016-02-03 21:20:02 +0000]  [INFO] Booting worker with pid: 6544
[2016-02-03 21:20:02 +0000]  [INFO] Booting worker with pid: 6546
[2016-02-03 21:20:02 +0000]  [INFO] Booting worker with pid: 6547
This gave me the impression that something had crashed and an exception was being swallowed, so I had been fighting with logging, thinking that was my issue. However, as I said, when I put explicit raise Exception statements in various places, Gunicorn does not swallow the exception and logs it properly. So there is some behavior going on that I don't understand. I am still mystified as to why all I see is evidence of the worker restarting, with no underlying stack trace pinpointing where in the code the exception was raised.
Folks, I am sorry for the noise here. The issue has nothing to do with gunicorn and gunicorn logging is working perfectly. If anyone else comes here with the same issue read about the gunicorn graceful restart feature. I was getting loss of db connectivity and gunicorn was restarting the workers before an exception was being thrown by my DB client. Again, I apologize for the noise here.
Gunicorn error log is here to log errors from Gunicorn, not from another application
Please add that to the documentation! The only thing the docs have to say about the errorlog setting is "The Error log file to write to." which does not convey that information.
I started assembling an example Django app that would fulfil the requirements from my past comment as project fail-nicely-django. Hopefully there'll be a fail-nicely-flask too in the future when I get to it. Maybe someone finds this useful regarding the scattered pieces of info we mentioned in this issue thread. Suggestions/contributions welcome 😄
Put a note in the docs that the error log contains only errors from Gunicorn.
@metakermit I tweeted it :)
@benoitc cool, thanks for the help :)