Send log to multiple log drivers #17910

Open
jimmidyson opened this Issue Nov 11, 2015 · 60 comments

jimmidyson commented Nov 11, 2015

I'd like to be able to demux logs to multiple log drivers. The use case for this is to log to disk with the json-file driver so that docker logs (live streaming) works, but also to send the logs to an archive via e.g. the fluentd log driver for longer-term viewing/searching/filtering/etc.

thaJeztah (Member) commented Nov 11, 2015

Perhaps viewing the fluentd logs is really the best option here (although supporting docker logs for drivers other than json-file and journald would be nice to have).

Using the json-file driver is discouraged for production use anyway, and it can be quite resource-hungry for high-volume logging.

jimmidyson (Author) commented Nov 11, 2015

Agreed. Fluentd doesn't do any storage itself, just routing to various storage/processing backends, so it's not really an option unless the backend supports log streaming. In my case (and in the Kubernetes examples) that's normally Elasticsearch, which doesn't support streaming.

I didn't know that docker logs worked with journald. That would be a much better way to go, with rsyslog or something similar configured to route logs onward from there.
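For the "route logs onward from there" idea, a minimal rsyslog forwarding rule could look like the sketch below; the filter, target host, and port are illustrative placeholders, not from this thread.

```
# /etc/rsyslog.d/30-docker-forward.conf (illustrative sketch)
# Forward messages tagged by the Docker daemon to a remote collector.
# "@@" means TCP, a single "@" means UDP; host and port are placeholders.
if $programname startswith 'docker' then @@logs.example.com:514
```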

thaJeztah (Member) commented Nov 11, 2015

Yup, it was added in #13707. Forgot which release that was (sorry 😄)

crosbymichael (Member) commented Nov 13, 2015

Would it be better to just log to something like syslog, then have all the tools built around that forward to other locations? It's a lot of overhead in the daemon to do something like this.

thaJeztah (Member) commented Nov 15, 2015

I agree with @crosbymichael here; I'm not sure we want to add the extra complexity in the daemon for this.

oncletom commented Nov 17, 2015

Agreed with @crosbymichael; it's more portable, as any tool can tail the logfile based on one or many container IDs.

jimmidyson (Author) commented Nov 17, 2015

Can the syslog driver still work with docker logs? I was under the impression that only json-file and journald were supported. I agree that your approach generally makes more sense, though.

oncletom commented Nov 17, 2015

@jimmidyson not at the moment, but you can still do tail -f /var/log/syslog | grep --line-buffered <container-id> (or /var/log/messages on Fedora/CentOS etc.).

docker logs could wrap that for us to fully embrace logging strategies, but this would only be sugar on top of your existing system habits.
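Run against a throwaway sample file, the suggested pipeline behaves like this. The container ID, log lines, and file name below are made up for illustration; on a live host you would follow /var/log/syslog with tail -f instead of grepping a static file.

```shell
# Create a small sample syslog file; the container ID "3f4a9b2c1d" is a
# made-up stand-in for a real Docker container ID.
printf '%s\n' \
  'Nov 17 10:00:01 host docker/3f4a9b2c1d[123]: starting' \
  'Nov 17 10:00:02 host docker/aaaaaaaaaa[456]: other container' \
  'Nov 17 10:00:03 host docker/3f4a9b2c1d[123]: ready' > sample-syslog

# On a live system: tail -f /var/log/syslog | grep --line-buffered 3f4a9b2c1d
# --line-buffered makes grep flush each match immediately, which matters
# when following a growing file rather than reading a finished one.
grep --line-buffered '3f4a9b2c1d' sample-syslog
```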

jimmidyson (Author) commented Nov 17, 2015

The underlying issue I'm trying to solve is moving logs off nodes as quickly as possible for archiving/filtering/searching/etc. while retaining docker logs functionality for live viewing. I guess some configuration around journald should do that for me, but it would be good to support docker logs for all log drivers (fluentd comes to mind as one to support somehow).

oncletom commented Nov 17, 2015

@jimmidyson rsyslog is your friend too :-)

cpuguy83 (Contributor) commented Nov 18, 2015

@jimmidyson supporting docker logs on a logging backend is highly dependent on the service being called.
Logging to syslog, for instance, has no defined way to go back and read those logs.

djsly commented Jan 11, 2016

We have the same requirements. Currently logs are pushed to Logstash using the gelf driver, but for quick debugging (for example, kube-ui / docker logs) we would like to keep the functionality of the json-file driver. log-opt for json-file could keep 1-2 days of logs, while Logstash provides us with a long-term archiving solution.
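For reference, the gelf half of such a setup is configured roughly like this in /etc/docker/daemon.json; the Logstash address below is a placeholder. As of this thread only one log driver can be active per container, which is exactly the limitation being discussed.

```json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logstash.example.com:12201"
  }
}
```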

mihasK commented Jan 22, 2016

Agreed with @djsly. I want to use the gelf driver. I can set up logstash to log to a local file and use tail -f for quick debugging, but I think the log there will be JSON-formatted, containing extra information like 'container_id' and 'image_name', so it will be harder to read than the docker logs format.

jotunskij commented Mar 27, 2016

+1 for the same reasons as multiple people before me

safanaj commented Apr 6, 2016

+1

jdunmore commented Apr 6, 2016

I hate leaving a "me too" comment, but it's exactly the same problem here with AWS CloudWatch Logs: not being able to use docker logs or the kube-ui.

cpuguy83 (Contributor) commented Apr 6, 2016

Aren't there better tools for reading/parsing logs than docker logs?
On a prod system I would expect to be able to read all my logs in one place.

jimmidyson (Author) commented Apr 6, 2016

@cpuguy83 For archiving logs there are loads of great tools. The problem is live streaming, which, even in a single place, usually ultimately streams straight from docker logs. Both archiving and live streaming are important for prod systems.

cpuguy83 (Contributor) commented Apr 6, 2016

@jimmidyson For live streaming you can do docker attach --no-stdin

erindru commented Apr 12, 2016

What about an "aggregate" logging driver whose sole purpose is to delegate to multiple other logging drivers? That way we could log to json-file to retain docker logs functionality (configured with a small max file size for live streaming) and also use something like GELF.

Is this a terrible idea?

michaelajr commented May 15, 2016

+1 for an aggregate driver delegating to multiple log drivers. That would give the most flexibility. I didn't know about "attach" for live streaming; I'll try that. Thanks.

uschtwill commented May 31, 2016

Thanks for pointing that out, @cpuguy83, that's really all that's needed for me. Logs to Logstash via gelf, docker attach --no-stdin for development and debugging. Beautiful!

mcandre (Contributor) commented Jun 30, 2016

+1

pulserdd commented Jul 11, 2016

+1

bwnyasse commented Aug 5, 2016

+1

averri commented Aug 13, 2016

+1

dmavrin commented Aug 17, 2016

+1

szakasz commented Aug 29, 2016

I'm not sure whether it's just me being clumsy, but when I tried the praised docker attach --no-stdin "solution" and, after checking the logs, pressed CTRL+C (as with docker logs), the container exited. I terminated 5 containers before realizing this side effect. That's definitely not nice, even though it is documented here: https://docs.docker.com/engine/reference/commandline/attach/ (I should have pressed ctrl+p then ctrl+q).
The same page also mentions that

Because of this, it is not recommended to run performance critical applications that generate a lot of output in the foreground over a slow client connection. Instead, users should use the docker logs command to get access to the logs.

shane-axiom commented Aug 29, 2016

@szakasz Add --sig-proxy=false to avoid passing signals (e.g. ctrl-c) to the container.

https://docs.docker.com/engine/reference/commandline/attach/
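Combined with --no-stdin from earlier in the thread, the full live-view invocation looks like this; the container name is a placeholder, and a running Docker daemon is assumed.

```shell
# Stream a running container's output without wiring up stdin, and
# without proxying signals, so Ctrl-C ends the attach session rather
# than stopping the container. "my-container" is a placeholder name.
docker attach --no-stdin --sig-proxy=false my-container
```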

evmin commented Jan 23, 2017

> For what it's worth, it looks like docker logs now supports the journald logging driver. You can then use fluentd to stream from journald to wherever (e.g. Elasticsearch), while still maintaining the benefits of docker logs.

Tried that. Fluentd cannot read journald reliably as yet.

cpuguy83 (Contributor) commented Apr 10, 2017

Is multiple drivers really needed, or just the ability to call docker logs when another driver is used?

By the way, logging plugins were just merged; a multi-logger could be implemented there. I do not think we will add support for multiple drivers in the core, though we can look at a solution for enabling docker logs for all drivers.

erindru commented Apr 10, 2017

@cpuguy83 retaining the ability to use docker logs would solve my use case. Still being able to configure how many logs are retained locally would be crucial, though, so you don't run the server out of space, which is one of the main reasons for using a different logging driver in the first place.

cpuguy83 (Contributor) commented Apr 10, 2017

@erindru I hope running out of space isn't the main case since we have rotation support 😄
Getting logs off an ephemeral machine though is pretty important.

erindru commented Apr 10, 2017

Sorry, what I meant was: in order for docker logs to work, the logs need to be local, right? So retaining the ability to configure the local log rotation, in addition to configuring the specified logging driver, would be crucial.
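The local-rotation knobs that exist today for the json-file driver are its max-size and max-file options; a daemon.json sketch follows (the sizes are arbitrary examples, not recommendations from this thread).

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```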

binman-docker commented Apr 10, 2017

Multiple drivers would open up a lot of use cases and flexibility. I know we've talked about how complex it would be compared to a ring buffer or similar, but I think it's the "better" option long-term. Either would solve our specific use case though.

simplesteph commented Apr 10, 2017

Short term, all I need is for docker logs to work while my logs are still shipping elsewhere!

michaelajr commented Apr 11, 2017

Exactly. Having logs shipped off-box is super important, but having docker logs available to quickly debug would be great.

shane-axiom commented Apr 11, 2017

+1. The ability to use local docker logs when another driver is used solves my use case, especially if true multiple drivers could be handled via a plugin.

cpuguy83 (Contributor) commented Apr 11, 2017

@shane-axiom Even if it takes some time to get the feature into Docker, a plugin can declare that it supports reading logs, so this can be wholly handled by a plugin.

sudo-bmitch commented Apr 11, 2017

I'm also looking for the ability to use docker logs while another plugin is in use. Multiple-plugin support, or a meta logging plugin that fans out to other logging plugins, would be nice to have, but isn't required for the environments I'm supporting right now.

shurshun commented Apr 12, 2017

+1 for the ability to use docker logs with another plugin being used.

1arrow commented May 11, 2017

+1

gerardjp commented Aug 17, 2017

+1 for the ability to use docker logs with another plugin being used.

sampleref commented Aug 28, 2017

+1 for docker logs along with a logging driver

weijiekoh commented Sep 11, 2017

+1. This would be extremely useful.

eugenegordeiev commented Dec 15, 2017

+1. It's very difficult to debug without having docker logs available.

Miserlou commented Mar 14, 2018

This issue is cascading into downstream projects. For instance, since Docker powers some Nomad tasks, there is no way to multiplex Nomad log streams to different drivers when using Docker-driven tasks. It'd be great if this feature got more attention.

gbolo commented Aug 4, 2018

Sending logs to a remote server using one of the logging drivers while still retaining the local logs would be a great option to have.

cpuguy83 (Contributor) commented Aug 4, 2018

This is available in Docker EE, btw.

sudo-bmitch commented Aug 7, 2018

@cpuguy83 Is this planned for a Docker CE release soon, or will it remain an EE-only feature?

cpuguy83 (Contributor) commented Aug 8, 2018

@sudo-bmitch I can't answer that.
