apiserver: fix leaks in logsink handler #6519
Conversation
Once the checks succeed, LGTM \o/
Well done for finding and fixing :D
logStreamHandler := srv.trackRequests(newLogStreamEndpointHandler(strictCtxt))
debugLogHandler := srv.trackRequests(newDebugLogDBHandler(httpCtxt))

add("/model/:modeluuid/logsink", logSinkHandler)
// TODO(axw) close the lumberjack writer.
Is empty line needed?
neither the empty line nor the TODO
MaxBackups: 2,
},
// This isn't a fatal error so log and continue if priming fails.
logger.Warningf("Unable to prime %s (proceeding anyway): %v", logPath, err)
Warning or Info?
Warning seems appropriate to me
If an error occurs when reading from the logsink client, the handler should close the connection. Also, close the lumberjack WriteCloser when the handler completes.
!!build!!

Status: merge request accepted. Url: http://juju-ci.vapour.ws:8080/job/github-merge-juju
apiserver: fix leaks in logsink handler

If an error occurs when reading from the logsink client, the handler should close the connection. Also, close the lumberjack WriteCloser when the handler completes. Backport of #6519.
While investigating https://bugs.launchpad.net/juju/+bug/1635311, I noticed a slow but steady increase in the number of goroutines in use by jujud. Taking a goroutine pprof snapshot, and then subsequently taking relative profiles, I found that there was >1 logsink handler. Since there's just one machine agent, this was unexpected.
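The diagnosis above rests on comparing goroutine counts and stacks over time: identical stacks that accumulate between snapshots point at a leaked handler. A stdlib sketch of that technique (names here are illustrative; the real investigation used the HTTP pprof endpoint on jujud):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"sync"
)

// leakGoroutines starts n goroutines that block until release is called,
// standing in for handlers that never exit. It returns goroutine counts
// taken before and after the "leak".
func leakGoroutines(n int) (before, after int, release func()) {
	before = runtime.NumGoroutine()
	block := make(chan struct{})
	var started sync.WaitGroup
	started.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			started.Done()
			<-block // blocks forever until released: this is the leak
		}()
	}
	started.Wait() // ensure every goroutine is running before we count
	after = runtime.NumGoroutine()
	return before, after, func() { close(block) }
}

func main() {
	before, after, release := leakGoroutines(3)
	fmt.Printf("goroutines: before=%d after=%d\n", before, after)

	// debug=1 groups goroutines with identical stacks, which is how more
	// than one copy of a supposedly unique handler stands out in a profile.
	pprof.Lookup("goroutine").WriteTo(os.Stderr, 1)
	release()
}
```

Taking the same `goroutine` profile twice and diffing the per-stack counts is what revealed the extra logsink handlers here.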
A bug introduced in 2.0 causes the apiserver not to close the websocket handler when a read from the logsink client fails; the handler lingers until the apiserver.Server itself is stopped. If an error occurs when reading from the logsink client, the handler should close the connection.
Also, close the lumberjack WriteCloser when the apiserver stops.