Logrotation on output log file causes empty log files #106
My logrotate.d script:
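A typical logrotate.d entry for a forever-managed log might look like the following (the path and options here are illustrative, not the original script):

```
/home/deploy/.forever/app.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```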
forever runs in daemon mode.
After logrotate runs overnight, it copies the log off to the rotated file fine, but forever then doesn't continue writing to the main log file.
It's most likely something up with my logrotate.d script, but any ideas?
Some daemon services like nginx respond to a signal such as SIGUSR1 by reopening their log files, so we can use `postrotate` in the logrotate config file to handle log rotation:
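For nginx, for example, a `postrotate` block like the following is commonly used (the pid file path is illustrative):

```
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
```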
Maybe forever could have a similar implementation.
I would have expected to be able to do this to manually rotate logs (process 0 logs to ~/.forever/foo.log):
Instead forever kept logging to foo.log.0, I suppose because it doesn't close and then reopen its log files. It's not optimal, but logrotate's copytruncate option can work around it.
I have copytruncate and I'm still seeing this issue, with firstname.lastname@example.org, email@example.com, and this in my logrotate.d directory:
All the forever logging moved to a file called cloud.log-20130704 (the first day it rotated), and each day a new empty file is created. cloud.log (the original) is zero bytes. And of course a forever restart doesn't fix it - logging continues to the same file.
Any fixes or am I switching to another logging / daemon framework?
The feature is still needed and still useful (it's hard to bypass this: logrotate "copytruncate" is not perfect, replacing logrotate by something else is not viable for standard deployments, and restarting the server is not wanted in many cases).
The foreversd/forever-monitor/pull/16 PR is outdated, and its reopening of the fd was not atomic. With the current master it seems easier to do, since we don't touch the child process's fds directly; we just read from them and pipe the data to a (file) stream.
The reload() in the link from @jfroffice is probably invalid because it ignores the fd returned by fs.openSync. However, using the sync versions of close and open is one way to correctly implement an atomic reload. (Another non-blocking way might be to async open, then async close, and in the close callback swap the newly opened fd in where it's used; I'm not sure that works.)
Also, writing to
Could a maintainer clarify whether this is understood to be a problem? The forever command-line utility seems to be useless to us in production, because we can't safely rotate its logs, and so we have to resort to programmatic use of forever-monitor so that we can configure the logging. But this issue has been open so long that perhaps I've misunderstood something that makes it a non-problem.
By end-to-end this example should assume
Having done server maintenance for several years, I can say this is a common issue that is not specific to forever.
While you appear to be logging to a file, you are really logging to a file descriptor. After log rotation by an external application, your application continues to log to the same file descriptor, but that descriptor is no longer connected to the file at the original path, which has been re-created by the rotation. While the new log may be empty, your disk usage may well be continuing to grow.
Possible solutions to log rotation complications
logrotate and copytruncate
Above there was a recommendation to use logrotate's copytruncate option. It mostly works, but the logrotate documentation notes there is a small window between copying the file and truncating it, so some log lines can be lost.
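With copytruncate, logrotate copies the log and truncates the original in place, so the application can keep its file descriptor open. A sketch of such an entry (the path is illustrative):

```
/home/deploy/.forever/app.log {
    daily
    rotate 7
    compress
    copytruncate
}
```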
Restart the app
Build log rotation into forever
You could submit a pull request which adds log rotation into forever itself.
Log directly from your app over the network to syslog or a 3rd-party service.
This avoids the direct use of log files, but most of the options I've looked at for this in Node.js share the same design flaw: They don't (or didn't recently) handle the "sad path" of the remote logging server being unavailable. If they coped with it at all, the solution was to put buffered records into an in-memory queue of unlimited size. Given enough logging or a long enough outage, memory would eventually fill up and things would crash. Limiting the buffer queue size would address that issue, but it illustrates a point: designing robust network services is hard. You are likely busy building and maintaining your main application. Do you want to also be responsible for the memory, latency, and CPU concerns of a network logging client embedded in your application?
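The bounded-queue idea can be sketched as follows (`BoundedLogBuffer` is a hypothetical class, not a real library): when the remote sink is down, keep at most `limit` records in memory and drop the oldest, instead of queueing without bound.

```javascript
class BoundedLogBuffer {
  constructor(limit) {
    this.limit = limit;
    this.records = [];
    this.dropped = 0; // count of records sacrificed to the cap
  }

  push(record) {
    if (this.records.length >= this.limit) {
      this.records.shift(); // drop the oldest rather than grow without bound
      this.dropped++;
    }
    this.records.push(record);
  }

  // In a real client, flush() would ship the records once the service recovers.
  flush() {
    const out = this.records;
    this.records = [];
    return out;
  }
}

const buf = new BoundedLogBuffer(3);
['a', 'b', 'c', 'd'].forEach((r) => buf.push(r));
console.log(buf.flush()); // [ 'b', 'c', 'd' ] - 'a' was dropped
```

Dropping old records loses data, but it keeps the process alive, which is usually the right trade-off for a logging side-channel.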
For reference, here are the related bug reports I've opened about this:
If you are using a module that logs over the network directly, you might wish to check how it handles the possibility that the network or the logging service is down.
Log to STDOUT and STDERR, use syslog
If your application simply logs to STDOUT and STDERR instead of a log file, then you've eliminated the problematic direct use of log files and created a foundation that lets something which specializes in logging handle the logs.
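A minimal sketch of this style (the helper names are illustrative, not a real library): the process writes structured lines to STDOUT/STDERR and never opens a log file, so rotation becomes someone else's job.

```javascript
// Format one structured log record as a JSON line.
function formatLog(level, msg) {
  return JSON.stringify({ time: new Date().toISOString(), level, msg });
}

// Route errors to STDERR and everything else to STDOUT;
// the process itself never owns a file on disk.
function log(level, msg) {
  const line = formatLog(level, msg) + '\n';
  (level === 'error' ? process.stderr : process.stdout).write(line);
}

log('info', 'server started');
log('error', 'upstream timeout');
```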
I recommend reading the post Logs are Streams, Not Files, which makes a good case for why you should log to STDOUT and shows how you can pipe your logs to a dedicated log handler.
Logging to STDOUT and STDERR is also considered a best practice in the App Container Spec. I expect to see more of this logging pattern as containerization catches on.
Log to STDOUT, use systemd
systemd will be standard in future Ubuntu releases and is already standard in Fedora. CoreOS uses systemd inside its containers to handle process supervision and logging, and also because it starts in under a second.
How to Log to STDOUT effectively with forever?
About now, you may be looking at the
What you might hope works:
You can use the same approach with the
You are not limited to using this syntax to pipe your logs to
I noticed that there was a new `/var/log/18f-pages-server/pages.log` file, but that the logs were still going to an uncompressed `/var/log/18f-pages-server/pages.log.1`. The old `postrotate` script wasn't actually restarting the server successfully, and a manual restart also didn't allow the new process to write to the new log file. I found out about the `copytruncate` directive, which will work well enough for 18F Pages, though it's not completely ideal. For more background, this issue has a very helpful comment with a ton of technical detail: foreversd/forever#106 (comment)