Defining log-driver and log-opt when specifying pod in RC and Pod #15478

Open
ejemba opened this Issue Oct 12, 2015 · 55 comments

ejemba commented Oct 12, 2015

We need to be able to define the following options when specifying the pod definition in an RC or Pod:

--log-driver= Logging driver for container
--log-opt=[] Log driver options

These options were introduced in Docker 1.8 and should be settable at the container level.

Since the Docker client library supports both options, adding them to the pod definition is now doable.
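
For reference, a minimal sketch of what these two flags do today when talking to Docker directly (the syslog endpoint and tag below are made-up example values):

```sh
# Per-container logging driver and driver options via the Docker CLI.
# The syslog address and tag are example values only.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  --log-opt tag=my-app \
  nginx
```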

Member

ncdc commented Oct 13, 2015

Hmm, I think we'll probably want to be able to set this cluster-wide as a default, and then maybe allow specific pod definitions to override.

cc @sosiouxme @smarterclayton @liggitt @jwhonce @jcantrill @bparees @jwforres

Contributor

smarterclayton commented Oct 13, 2015

Can you describe how you would leverage this on a per container basis (use case)? We traditionally do not expose Docker specific options directly in containers unless they can be cleanly abstracted across runtimes. Knowing how you would want to use this will help justify it.

sosiouxme Oct 13, 2015

Note that docker logs still only supports the json-file and journald drivers, though I imagine that list could expand.

Perhaps what users would actually want is a selection of defined log writing endpoints, not exposure to the logging driver details.


ejemba commented Oct 13, 2015

@ncdc @smarterclayton I agree with both of you. After reconsidering our use case internally, it turns out that:

  1. Our primary need is to protect our nodes. We send the logs to a log server, but if it fails, logs fall back to Docker's internal logging. In that case, to prevent node saturation, we need a cluster-wide behaviour for Docker logging.
  2. Exposing Docker-specific options in the pod/RC definitions is not a good idea, as @smarterclayton suggested. We would also welcome an abstraction allowing the definition of high-level log behaviour, if possible.
  3. Another option is changing the kubelet configuration files and code to handle such log behaviour.

Contributor

smarterclayton commented Oct 13, 2015

The changes to the salt templates to make this a default should not be terribly difficult. It's really just proper daemon configuration (and dealing with any changes to log aggregation via fluentd by virtue of selecting a different source).


halr9000 May 12, 2016

👍

Note that there are now 9 logging drivers. What's the consensus on getting this one in?


@briangebala

+1

obeattie May 18, 2016

In case anyone isn't aware, you can define the default log driver on a per-node basis with a flag to the Docker daemon (--log-driver). In my environment, I set the driver to journald this way. I struggle to think of a use-case for overriding this on a per-container basis to be honest.
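
For anyone looking for the mechanics of that per-node default, a minimal sketch (either form works; the Docker daemon must be restarted after changing daemon.json):

```sh
# Option A: pass the default driver as a daemon flag
dockerd --log-driver=journald

# Option B: persist it in /etc/docker/daemon.json
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "journald"
}
EOF
sudo systemctl restart docker
```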


Member

timothysc commented May 18, 2016

Most clusters will not want their logs going "out-of-band", so what is the feature enablement that this would provide?

Also, from an ops perspective it looks like a loss of control. Currently we set the defaults and configure a logging stack to aggregate.

jnardiello May 23, 2016

+1 on this.
Not being able to control how Docker logging is handled implies that the only sane logging option is using the tools shipped with k8s, which is an incredible limitation.

@timothysc here is our use case. We have a complex, dynamic infrastructure (~100 machines) with a lot of existing services running on them, and our own logstash to gather logs. We are now trying to move our services, one by one, to k8s, and there seems to be no clean way to integrate logging between our existing infrastructure and containers clustered on k8s.

K8S is extremely opinionated about how you gather logs. This might be great for whoever is starting from scratch on a simple infrastructure. For everyone else working on complex infrastructures who wouldn't mind diving deep and implementing a custom logging mechanism, there is simply no way to do it at the moment, which is quite frustrating.

Hopefully it makes sense.


Contributor

smarterclayton commented May 23, 2016

So in your scenario logs are truly "per application", but you have to ensure the underlying host supports those logs? That's the concern we're discussing here - either we do cluster level, or node level, but if we do pod level, then the scheduler would have to be aware of what log drivers are present where. As much as possible we try to avoid that.


jnardiello May 26, 2016

@smarterclayton I do understand your concerns and they are well placed. I'm not sure the whole cluster has to be aware of the existence of pod-level logging; what I think we should do is give the option to log pod stdout/stderr somewhere (a file based on the current pod name?) so that anyone willing to implement a custom solution has a persisted place to get the content from. This opens up a HUGE chapter though, as log rotation is not trivial.

These are just my two cents, but we can't expect real-world complex scenarios to just give up their existing logging infrastructure.


Contributor

smarterclayton commented May 26, 2016

Are you specifying custom log options per application? How many different sets of log options would you have per cluster? If there are small sets of config, an option would be to support an annotation on pods that is correlated to node-level config that offers a number of "standard log options". I.e. at kubelet launch time define a "log mode X" (which defines custom log options and driver), and the pod would specify "pod.alpha.kubernetes.io/log.mode=X".

Yet another option would be that we expose a way to let deployers have the opportunity to mutate the container definition immediately before we start the container. That's harder today because we'd have to serialize the docker def out to an intermediate format, execute it, and then run it again, but potentially easier in the future.

Finally, we could expose key value pairs on the container interface that are passed to the container engine directly, offer no API guarantees for them, and ensure PodSecurityPolicy can regulate those options. That would be the escape hatch for callers, but we wouldn't be able to provide any guarantee those would continue to work across releases.
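
To make the first idea concrete, a purely hypothetical sketch of what such an annotation could look like on a pod - neither the annotation key nor the kubelet-side "log mode" exists today:

```sh
# Hypothetical only: "logmode-x" would have to be defined in the kubelet/node
# config, and the annotation key below is not a real Kubernetes API.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    pod.alpha.kubernetes.io/log.mode: "logmode-x"
spec:
  containers:
  - name: app
    image: nginx
EOF
```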


Member

ncdc commented May 26, 2016

@smarterclayton have you seen #24677 (comment)?

Contributor

smarterclayton commented May 26, 2016

No, thanks. Moving discussion there.


pbthorste Nov 29, 2016

Hi there,
I think this is an important feature that should be considered for Kubernetes.
Enabling the use of Docker's log driver can solve some non-trivial problems.

I would say that logging to disk is an anti-pattern. Logs are inherently "state", and should preferably not be saved to disk. Shipping the logs directly from a container to a repository solves many problems.

Setting the log driver would mean that the kubectl logs command cannot show anything anymore. While that feature is "nice to have", it won't be needed when the logs are available from a different source.

Docker already has log drivers for Google Cloud (gcplogs) and Amazon (awslogs). While it is possible to set them on the Docker daemon itself, that has many drawbacks. By being able to set the two Docker options:

--log-driver= Logging driver for container
--log-opt=[] Log driver options

it would be possible to send along labels (for gcplogs) or an awslogs-group (for awslogs) specific to a pod. That would make it easy to find the logs at the other end.

I have been reading up on how people are handling logs in Kubernetes. Many seem to set up elaborate scrapers that forward the logs to central systems. Being able to set the log driver would make that unnecessary - freeing up time to work on more interesting things :)
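
As an illustration of the per-container routing being described, this is roughly what it looks like with the Docker CLI today (the region and log group are placeholder values):

```sh
# Ship this container's stdout/stderr straight to a CloudWatch Logs group.
docker run -d \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-app-logs \
  my-app:latest
```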


daniilyar commented Dec 12, 2016

I can also add that some people, including me, want to perform Docker log rotation via the '--log-opt max-size' option of the json-file logging driver (which is native to Docker) instead of setting up logrotate on the host. So even exposing just the '--log-opt' option would be appreciated.
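
For context, the Docker-level equivalent of that rotation setting, either per container or as a daemon-wide default (the sizes are examples):

```sh
# Per container: cap each json-file log at 10 MB and keep at most 3 files.
docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  my-app:latest

# Daemon-wide default via /etc/docker/daemon.json (restart docker afterwards):
#   { "log-driver": "json-file", "log-opts": { "max-size": "10m", "max-file": "3" } }
```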

barnettZQG Dec 12, 2016

I have modified k8s to set the container's LogConfig when creating containers.


defat commented Dec 27, 2016

+1
Using the Docker log driver for centralized log collection looks much simpler than creating symbolic links for log files, mounting them into a special fluentd container, tailing them, and managing log rotation.

et304383 commented Jan 12, 2017

Use case for per-container configuration: I want to log elsewhere or differently for the containers I deploy, and I don't care about (or want to change) the log driver for the standard containers necessary to run Kubernetes.

There you go. Please make this happen.

xmik commented Jan 12, 2017

Another idea: all the containers still forward logs to the same endpoint, but you can at least set different field values for different Docker containers on your log server.

This would work for the gelf Docker driver if we could ensure Docker containers created by Kubernetes are custom-labelled. Meaning: some Pod fields could be forwarded as Docker container labels. (Maybe this is already possible, but I don't know how to achieve it.)

Example without Kubernetes, only with the Docker daemon and gelf driver. Have the Docker daemon configured with: --log-driver=gelf --log-opt labels=env,label2 and create a Docker container:

docker run -dti --label env=testing --label label2=some_value alpine:3.4 /bin/sh -c "while true; do date; sleep 2; done"

and another Docker container:

docker run -dti --label env=production --label label2=some_value alpine:3.4 /bin/sh -c "while true; do date; sleep 2; done"

This way, on Graylog, you can differentiate between env=production and env=testing containers.

Currently I use such docker daemon options:

--log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201 --log-opt tag=k8s-testing --log-opt labels=io.kubernetes.pod.namespace,io.kubernetes.container.name,io.kubernetes.pod.name
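
The same daemon options can be expressed as a persistent /etc/docker/daemon.json; a sketch, using the Graylog address from the comment above:

```sh
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://graylog.example.com:12201",
    "tag": "k8s-testing",
    "labels": "io.kubernetes.pod.namespace,io.kubernetes.container.name,io.kubernetes.pod.name"
  }
}
EOF
sudo systemctl restart docker   # the daemon must be restarted to pick this up
```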
stainboy commented Jan 26, 2017

@xmik, just want to confirm: is this an existing feature or your proposal? Regarding:

Currently I use such docker daemon options:

--log-driver=gelf --log-opt gelf-address=udp://graylog.example.com:12201 --log-opt tag=k8s-testing --log-opt labels=io.kubernetes.pod.namespace,io.kubernetes.container.name,io.kubernetes.pod.name
xmik commented Jan 26, 2017

Those Docker daemon options I currently use already work. Kubernetes already sets some labels for each Docker container. For example, when running docker inspect on the kube-apiserver container:

 "Labels": {
   "io.kubernetes.container.hash": "4959a3f5",
   "io.kubernetes.container.name": "kube-apiserver",
   "io.kubernetes.container.ports": "[{\"name\":\"https\",\"hostPort\":6443,\"containerPort\":6443,\"protocol\":\"TCP\"},{\"name\":\"local\",\"hostPort\":8080,\"containerPort\":8080,\"protocol\":\"TCP\"}]",
   "io.kubernetes.container.restartCount": "1",
   "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
   "io.kubernetes.pod.name": "kube-apiserver-k8s-production-master-1",
   "io.kubernetes.pod.namespace": "kube-system",
   "io.kubernetes.pod.terminationGracePeriod": "30",
   "io.kubernetes.pod.uid": "a47396d9dae12c81350569f56aea562e"
}

Hence, those docker daemon options work.

However, I think it is not possible right now to make Kubernetes set custom labels on a Docker container based on the Pod spec. So e.g. --log-driver=gelf --log-opt labels=env,label2 does not work.

k8s-merge-robot added a commit that referenced this issue Feb 27, 2017

Merge pull request #40634 from Crassirostris/use-docker-log-rotation
Automatic merge from submit-queue

Use docker log rotation mechanism instead of logrotate

This is a solution for #38495.

Instead of rotating logs using logrotate tool, which is configured quite rigidly, this PR makes docker responsible for the rotation and makes it possible to configure docker logging parameters. It solves the following problems:

* Logging agent will stop losing lines upon rotation
* Container's logs size will be more strictly constrained. Instead of checking the size hourly, size will be checked upon write, preventing #27754

It's still far from ideal, for example setting logging options per pod, as suggested in #15478 would be much more flexible, but latter approach requires deep changes, including changes in API, which may be in vain because of CRI and long-term vision for logging.

Changes include:

* Change in salt. It's possible to configure docker log parameters using variables in pillar. They're exported from env variables on `gce`, but for other cloud providers they have to be exported first.
* Change in `configure-helper.sh` scripts for those os on `gce` that don't use salt + default values exposed via env variables

This change may be problematic for kubelet logs functionality with CRI enabled, that will be tackled in the follow-up PR, if confirmed.

CC @piosz @Random-Liu @yujuhong @dashpole @dchen1107 @vishh @kubernetes/sig-node-pr-reviews

```release-note
On GCI by default logrotate is disabled for application containers in favor of rotation mechanism provided by docker logging driver.
```
beldpro-ci Mar 18, 2017

Is there any news on this front? Having the ability to specify the labels and then take advantage of --log-opt labels=<...> would be pretty good!


Member

pweil- commented Mar 31, 2017

@portante @jcantrill Just to capture it here because we discussed it, here is the use case we were thinking this might be useful for:

When the log recording pods start encountering and logging errors, the infra that gathers those errors will grab them and feed them back to the recording mechanism, which in turn throws and logs more errors.

This feedback loop can be avoided by using filtering mechanisms, but that is a bit brittle. Using a different logging driver to record to a file, with rotation options, seems like it would be a good solution.

Contributor

caarlos0 commented May 17, 2017

My 2 cents.

Current solutions to logging inside k8s are (AFAIK):

  • a sidecar container sending logs somewhere
  • a replication controller sending all logs somewhere
  • the container itself sending logs somewhere

A sidecar container seems kind of overkill to me. The replication controller strategy seems good, but it mixes logs of containers from all deployments, and some users might not want that and may instead want to log each app to a different destination. For those cases, the last option works best IMHO, but it creates a lot of code replicated in all containers (e.g. installing and setting up a logentries daemon).

This would all be way easier if we had access to log-driver flags, so each deployment could define how it should be logged, using Docker's native features.

I can try to implement that, but I will probably need some help - as I'm not familiar with the Kubernetes codebase.

kfox1111 May 17, 2017

once multi-tenancy becomes more of a thing, it will be harder to solve properly.

Each namespace may be a different tenant, so logs from each should not necessarily be aggregated, but allowed to be sent to tenant-specified locations.

I can think of a few ways of doing this:

  1. make a new volume type, container-logs (see the sketch below). This allows a daemonset launched by a particular namespace to access just the logs from its own containers. They can then send the logs with whatever log shipper of choice to whichever storage daemon of choice.
  2. Modify one (or more) of the log shippers, such as fluent-bit, to read the namespace the pod is in, and redirect logs from each pod to a further log shipper running in that namespace as a service, such as fluentd. This again allows the namespace to configure its own log shipper to push to whatever log backend it wants to support.
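
A purely hypothetical sketch of what option 1 might look like in a pod spec - the containerLogs volume type does not exist, everything below is illustrative only:

```sh
# Hypothetical API sketch: "containerLogs" is NOT a real volume type.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tenant-log-shipper
spec:
  selector:
    matchLabels: {app: tenant-log-shipper}
  template:
    metadata:
      labels: {app: tenant-log-shipper}
    spec:
      containers:
      - name: shipper
        image: fluent/fluent-bit     # the tenant's log shipper of choice
        volumeMounts:
        - {name: my-logs, mountPath: /var/log/tenant}
      volumes:
      - name: my-logs
        containerLogs: {}            # hypothetical: only this namespace's container logs
EOF
```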

Member

crassirostris commented May 17, 2017

@caarlos0 @kfox1111 I agree with your points. This is a complex topic, as it requires coordination of instrumentation, storage, node and maybe even more teams. I suggest having a proposal for the overall logging architecture laid out first and then discussing the changes against that consistent view. I expect this proposal to appear in a month or so, bringing order and figuring out all the problems mentioned.

Contributor

caarlos0 commented May 17, 2017

@crassirostris I'm not sure I understand: if we just allow log-driver et al., we don't have to deal with storage or any of that, right?

It's just Docker sending its STDOUT to whatever log driver is set up on a per-container basis, right? We kind of pass the responsibility down to the container... seems like a pretty simple solution to me - but, as I said, I don't know the codebase, so maybe I'm just plain wrong...

kfox1111 May 17, 2017

The issue is the log-driver in docker doesn't add any of the k8s metadata that makes consuming the logs later actually useful. :/


Contributor

caarlos0 commented May 17, 2017

@kfox1111 hmm, makes sense...

but what if the user only wants the "application" logs - not Kubernetes logs, not Docker logs, just the logs of the app running inside the container?

In that case, it seems to me, log-driver would work...

Member

crassirostris commented May 17, 2017

@caarlos0 It may have some implications, e.g. the kubelet makes some assumptions about the logging format in order to serve kubectl logs.

But all things aside, log-driver per se is Docker-specific and might not work for other runtimes; that's the main reason not to include it in the API.

Contributor

caarlos0 commented May 17, 2017

@crassirostris that makes sense...

Since this feature will not be added (as described in the issue), maybe this issue should be closed (or edited, or whatever)?

Member

crassirostris commented May 17, 2017

@caarlos0 However, we definitely want to make the logging setup more flexible and transparent. Your feedback on the proposal will be appreciated!

whereisaaron commented May 17, 2017

stdout logging from containers is currently handled out-of-band within Kubernetes. We currently rely on non-Kubernetes solutions to handle logging, or on privileged containers that jail-break Kubernetes to get access to the out-of-band logging. Container run-time logging is different per run-time (docker, rkt, Windows), so picking any one, like Docker's --log-driver, is creating future baggage.

I suggest we need the kubelet to bring log streams back in-band. Define or pick a minimal JSON or XML log format that collects stdout lines from each container, add minimal cluster+namespace+pod+container metadata so the log source is identified within Kubernetes space, and direct the stream to a Kubernetes Service+Port. Users are free to provide whatever log consumption Service they like. Maybe Kubernetes will provide one reference/default Service that implements the 'kubectl logs' support.

Without a logging consumption Service specified, logs would be discarded and not hit disk at all. Streaming the logs elsewhere, or writing to persistent storage and rotating - all that is the responsibility/decision of the Service.

The kubelet container runtime wrapper does the minimum to extract the stdout from each container runtime and bring it back in-band for k8s self-hosted Service(s) to consume and process.

The container spec in the Deployment or Pod would optionally specify the target Service and Port for stdout logging. Adding k8s metadata for cluster+namespace+pod+container would be optional (so the choice of raw/untouched or with metadata). Users would be free to aggregate all logs to one place, or aggregate by tenant, namespace, or application.

The nearest to this now is to run a Service that uses 'kubectl logs -f' to stream container logs for each container via the API server. That doesn't sound very efficient or scalable. This proposal would allow more efficient direct streaming from the container runtime wrapper to a Service or Pod, with optimizations like preferring logging Deployment or DaemonSet Pods on the same node as the container generating the logs.

I am proposing that Kubernetes should do the minimum to efficiently bring container run-time logs in-band, for any self-hosted, homogeneous or heterogeneous logging solutions we wish to create within Kubernetes space.

What do people think?

Member

crassirostris commented May 18, 2017

@whereisaaron I really would prefer not to have this discussion now, when we don't have all the details about the logging ecosystem in one place.

E.g. I see network and machine problems disrupting the log flow, but again, I don't want to discuss it just yet. How about we discuss this later, when the proposal is ready? Does that seem reasonable to you?

whereisaaron May 20, 2017

Certainly @crassirostris. Please let us know here when the proposal is ready to check out.


Member

kargakis commented Jun 10, 2017

/sig scalability

vhosakot Dec 13, 2017

Although both --log-driver and --log-opt are options for the Docker daemon and not k8s features, it would be nice to specify them in the k8s pod spec for:

  1. a per-pod log driver, not a single node-level log driver
  2. different types of app-specific log drivers (fluentd, syslog, journald, splunk) on the same node
  3. setting --log-opt to configure log rotation for a pod
  4. per-pod --log-opt settings, not a single node-level --log-opt

AFAIK, none of the above can be set at the pod level in the k8s pod spec today.


Member

crassirostris commented Dec 13, 2017

@vhosakot none of the above can be set at any level in Kubernetes, because those are not Kubernetes concepts

vhosakot Dec 13, 2017

@crassirostris exactly! :)

If k8s does everything that Docker does at the pod/container level, won't it be easier for users? Why make users use Docker at all for a few pod-level/container-level things?

And a k8s lover who is not a Docker fan may ask the same question.


Member

crassirostris commented Dec 13, 2017

@vhosakot The point is, there are a number of other container runtimes that can be used with K8s, but --log-opt exists only in Docker. Creating such an option at the K8s level would be intentionally leaking the abstraction. I don't think this is the way we want to go. If an option exists, it should be supported by all container runtimes, ideally as part of CRI.

I'm not saying there won't be such an option, I'm saying it won't be a direct route to Docker.

vhosakot Dec 13, 2017

@crassirostris True, it sounds like it comes down to whether k8s should do what CRI does/allows at the pod/container level, rather than anything Docker-specific.


Member

crassirostris commented Dec 13, 2017

Yup, absolutely correct

gabriel-tincu Dec 19, 2017

Although I'm late to this discussion, and I have an interest in seeing this feature implemented, I would argue that there is a trade-off between having a pretty design and having a straightforward way of setting up a sane and uniform logging solution for the cluster. Yes, having this feature implemented would expose Docker internals, which is a big no-no, but at the same time I would bet good money that the majority of K8s users use Docker as the underlying container tech, and Docker does come with a very comprehensive list of log drivers.


Member

crassirostris commented Dec 19, 2017

@gabriel-tincu I'm currently not convinced that the original FR is worth the trouble.

docker does come with a very comprehensive list of log drivers

You can set up logging at the Docker level during the K8s deployment step and use any of these log drivers, without leaking this information to K8s. The only thing you cannot do today is set up those options per-container/per-pod (actually, you can, with a setup that uses dedicated nodes and a node selector), but I'm not sure it's a big limitation.
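
A minimal sketch of the dedicated-nodes workaround mentioned above (the node names, label, and driver are all just examples):

```sh
# Configure the Docker daemon on a subset of nodes with the desired driver
# (e.g. gelf, as earlier in this thread), then label those nodes:
kubectl label nodes node-1 node-2 logging=gelf

# Schedule pods that need that driver onto those nodes with a nodeSelector:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  nodeSelector:
    logging: gelf
  containers:
  - name: app
    image: nginx
EOF
```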

gabriel-tincu Jan 12, 2018

@crassirostris I agree that you can set that up before setting up the environment, but if there's a way to actively update the Docker log driver after the environment is already set up, it eludes me at the moment.


whereisaaron Jan 12, 2018

@gabriel-tincu @vhosakot the direct interface that used to exist between k8s and Docker back in the 'olden days' of >=1.5 is deprecated, and I believe the code is totally removed now. Everything between the kubelet and runtimes like Docker (or others like rkt, cri-o, runc, lxd) goes through CRI. There are lots of container runtimes now, and Docker itself is likely to be deprecated and removed soon in favor of cri-containerd+containerd.

http://blog.kubernetes.io/2017/11/containerd-container-runtime-options-kubernetes.html


@crassirostris any movement on a proposal that might include the possibility of in-band container logging?


luckyfengyong referenced this issue in kubernetes/ingress-nginx, Jan 24, 2018: "Enable access_log rotate for ingress" #1964 (Closed)

Member

Random-Liu commented Feb 22, 2018

The CRI container log is file based (https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/kubelet-cri-logging.md), and the log path is explicitly defined:

/var/log/pods/PodUID/ContainerName/RestartCount.log

Of the Docker logging drivers (https://docs.docker.com/config/containers/logging/configure/#supported-logging-drivers), I think for a cluster environment the most important ones are the drivers that ingest container logs into a cluster logging management system, such as splunk, awslogs, gcplogs etc.

In the case of CRI, no "docker log driver" should be used. People can run a daemonset to ingest container logs from the CRI container log directory into wherever they want. They can use fluentd or even write a daemonset themselves.

If more metadata is needed, we can think about dropping a metadata file, extending the file path, or letting the daemonset get metadata from the apiserver. There is ongoing discussion about this in #58638.
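
To illustrate the layout described above, a trivial sketch of what a node-level agent (fluentd, fluent-bit, or anything else running as a DaemonSet with that host path mounted) ends up reading:

```sh
# CRI log path layout per the kubelet CRI logging design:
#   /var/log/pods/<PodUID>/<ContainerName>/<RestartCount>.log
# A node agent tails these files and attaches pod metadata, e.g.:
tail -F /var/log/pods/*/*/*.log
```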

fejta-bot May 23, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale


fejta-bot Jun 22, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale


iavael commented Jun 25, 2018

/remove-lifecycle rotten

bryan831 commented Jul 4, 2018

Any updates on this? How has anyone running k8s with Docker containers settled on logging to a backend like AWS CloudWatch?

whereisaaron Jul 4, 2018

@bryan831 it is popular to collect the k8s container log files using fluentd or similar and aggregate them into your choice of back-end: CloudWatch, Stackdriver, Elasticsearch, etc.

There are off-the-shelf Helm charts for e.g. fluentd+CloudWatch, fluentd+Elasticsearch, fluent-bit->fluentd->your choice, Datadog, and probably other combinations if you poke around.

