Defining log-driver and log-opt when specifying pod in RC and Pod #15478
Comments
/cc @kubernetes/rh-cluster-infra
Hmm, I think we'll probably want to be able to set this cluster-wide as a default, and then maybe allow specific pod definitions to override. cc @sosiouxme @smarterclayton @liggitt @jwhonce @jcantrill @bparees @jwforres
Can you describe how you would leverage this on a per-container basis (use case)? We traditionally do not expose Docker-specific options directly in containers unless they can be cleanly abstracted across runtimes. Knowing how you would want to use this will help justify it.
Note that docker logs still only supports the json-file and journald drivers, though I imagine that list could expand. Perhaps what users actually want is a selection of defined log-writing endpoints, not exposure to the logging-driver details.
@ncdc @smarterclayton I agree with both of you; after reconsidering our use case internally, it turns out that …
The changes to the salt templates to make this a default should not be …
👍 Note that there are now 9 logging drivers. What's the consensus on getting this one in?
+1
In case anyone isn't aware, you can define the default log driver on a per-node basis with a flag to the Docker daemon (--log-driver).
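A minimal sketch of that node-level default, assuming a systemd-managed Docker host; the journald choice here is just an example:

```sh
# Set a node-wide default log driver, either by starting the daemon
# with --log-driver or by writing /etc/docker/daemon.json as below.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "journald"
}
EOF
sudo systemctl restart docker   # assumes Docker runs under systemd
```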
Most clustered environments will not want their logs going "out-of-band", so what is the feature enablement this would provide? Also, from an ops perspective it looks like a loss of control: currently we set the defaults and configure a logging stack to aggregate.
+1 on this. @timothysc here is our use case. We have a complex, dynamic infrastructure (~100 machines) with a lot of existing services running on it, with our own logging setup. K8S is extremely opinionated about how you gather logs, which might be great for whoever is starting from scratch on a simple infrastructure. For everyone else working on complex infrastructures, who would not mind diving deep and implementing a custom logging mechanism, there is simply no way to do it at the moment, which is quite frustrating. Hopefully this makes sense.
So in your scenario logs are truly "per application", but you have to …
@smarterclayton I do understand your concerns and they are well placed. I'm not sure the whole cluster has to be aware of the existence of pod-level logging; what I think we should do is give the option to log pod stdout/stderr somewhere (a file based on the current pod name?) so that anyone willing to implement a custom solution has a persisted place to get the content from. This opens up a HUGE chapter though, as log rotation is not trivial. These are just my two cents, but we can't pretend that real-world complex scenarios will just give up their existing logging infrastructure.
Are you specifying custom log options per application? How many different …

Yet another option would be that we expose a way to let deployers have the …

Finally, we could expose key value pairs on the container interface that …
@smarterclayton have you seen #24677 (comment)?
No, thanks. Moving discussion there.
Hi there, I would say that logging to disk is an anti-pattern. Logs are inherently "state", and should preferably not be saved to disk. Shipping the logs directly from a container to a repository solves many problems. Setting the log driver would mean that the kubectl logs command cannot show anything anymore.

Docker already has log drivers for Google Cloud (gcplogs) and Amazon (awslogs). While it is possible to set them on the Docker daemon itself, that has many drawbacks. By being able to set the two Docker options:

--log-driver= Logging driver for container
--log-opt=[] Log driver options

it would be possible to send along labels (for gcplogs) or awslogs-group (for awslogs).

I have been reading up on how people are handling logs in Kubernetes. Many seem to set up elaborate scrapers that forward the logs to central systems. Being able to set the log driver would make that unnecessary, freeing up time to work on more interesting things :)
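To make the awslogs/gcplogs point concrete, here is what the per-container equivalent looks like today with plain docker run; the region, log group name, and label values are made-up examples:

```sh
# awslogs: ship a container's logs straight to CloudWatch Logs.
docker run -d \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-app \
  nginx

# gcplogs: forward a container label along with each log entry.
docker run -d \
  --log-driver=gcplogs \
  --log-opt labels=service \
  --label service=frontend \
  nginx
```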
I can also add that some people, including me, want to perform Docker log rotation via the '--log-opt max-size' option on the JSON logging driver (which is native to Docker) instead of setting up logrotate on the host. So even exposing just the '--log-opt' option would be greatly appreciated.
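A sketch of that rotation setup at the plain-Docker level; the size and file count are illustrative:

```sh
# Rotation handled by the json-file driver itself, no host logrotate:
# keep at most 3 files of 10 MB each per container.
docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx
```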
I have modified k8s to set LogConfig when creating the container configuration.
+1
Use case for per-container configuration: I want to log elsewhere, or differently, for the containers I deploy, and I don't care about (or want to change) the log driver for the standard containers necessary to run Kubernetes. There you go. Please make this happen.
Another idea: all the containers still forward logs to the same endpoint, but you can at least set different field values for different Docker containers on your log server. This would work for the gelf Docker driver, if we could ensure Docker containers created by Kubernetes are custom-labelled. Meaning: some of a Pod's fields could be forwarded as Docker container labels. (Maybe this is already possible, but I don't know how to achieve it.)

As an example without Kubernetes, only with the Docker daemon and the gelf driver: configure the daemon to forward a container label as a log field, then run one container with one value for that label and another container with a different value. This way, on Graylog, you can differentiate between the two containers. A sketch of the daemon options and container runs follows below.
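A reconstruction of the pattern described above; the label name, its values, and the Graylog address are hypothetical:

```sh
# Daemon configured to ship logs via GELF and to forward the
# "deployment" container label as a message field.
dockerd --log-driver=gelf \
        --log-opt gelf-address=udp://graylog.example.com:12201 \
        --log-opt labels=deployment

# Two containers that differ only in the label value:
docker run -d --label deployment=app1 nginx
docker run -d --label deployment=app2 nginx
# In Graylog the messages arrive with different _deployment field
# values, so the two containers can be told apart.
```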
@yingsimai: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen |
@ejemba: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
This probably needs to be re-opened :) |
/reopen |
@bithavoc: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen |
@MoSunDay: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen |
@ejemba: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
/reopen

So, it seems that Kubernetes is shit compared to Docker swarm.
@Janevski: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Is there any update?
Can we just tell the bot to leave this issue alone? cc @ejemba |
/reopen

@ushuz I don't know if it is possible to do this. I don't know if I have any rights on it.
@ejemba: Reopened this issue.
@ejemba In that case, maybe you can remove the lifecycle/rotten label.
/remove-lifecycle rotten |
This issue is no longer relevant since we have CRI and not just Docker. If we want to consider log params in CRI, we should open a new issue.
We need to be able to define the following options when specifying the pod definition in an RC or a Pod:
--log-driver= Logging driver for container
--log-opt=[] Log driver options
These options should be settable at the container level; they were introduced in Docker 1.8. Since the Docker client library supports both options as well, adding them to the pod definition is now doable. A sketch of what this could look like in a pod spec follows below.
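For illustration only, one possible shape for the requested fields; the logDriver and logOptions names are hypothetical and were never part of the Kubernetes API:

```yaml
# Hypothetical API shape only; Kubernetes never adopted these fields.
apiVersion: v1
kind: Pod
metadata:
  name: logging-example
spec:
  containers:
  - name: app
    image: nginx
    logDriver: gelf                                  # hypothetical field
    logOptions:                                      # hypothetical field
      gelf-address: udp://graylog.example.com:12201
```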