12 changes: 0 additions & 12 deletions docs/user-guide/logging/counter-pod.yaml

This file was deleted.

10 changes: 10 additions & 0 deletions docs/user-guide/logging/examples/counter-pod.yaml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
25 changes: 25 additions & 0 deletions docs/user-guide/logging/examples/fluentd-sidecar-config.yaml
@@ -0,0 +1,25 @@
apiVersion: v1
data:
  fluentd.conf: |
    # Tail the first log file and tag its entries.
    <source>
      type tail
      format none
      path /var/log/1.log
      pos_file /var/log/1.log.pos
      tag count.format1
    </source>

    # Tail the second log file with a separate tag.
    <source>
      type tail
      format none
      path /var/log/2.log
      pos_file /var/log/2.log.pos
      tag count.format2
    </source>

    # Ship everything to Stackdriver (Google Cloud Logging).
    <match **>
      type google_cloud
    </match>
kind: ConfigMap
metadata:
  name: fluentd-config
39 changes: 39 additions & 0 deletions docs/user-guide/logging/examples/two-files-counter-pod-agent-sidecar.yaml
@@ -0,0 +1,39 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-agent
    image: gcr.io/google_containers/fluentd-gcp:1.30
    env:
    - name: FLUENTD_ARGS
      value: -c /etc/fluentd-config/fluentd.conf
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    - name: config-volume
      mountPath: /etc/fluentd-config
  volumes:
  - name: varlog
    emptyDir: {}
  - name: config-volume
    configMap:
      name: fluentd-config
38 changes: 38 additions & 0 deletions docs/user-guide/logging/examples/two-files-counter-pod-streaming-sidecar.yaml
@@ -0,0 +1,38 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: count-log-2
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/2.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
26 changes: 26 additions & 0 deletions docs/user-guide/logging/examples/two-files-counter-pod.yaml
@@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
117 changes: 104 additions & 13 deletions docs/user-guide/logging/overview.md
@@ -19,9 +19,9 @@ The guidance for cluster-level logging assumes that a logging back-end is presen

## Basic logging in Kubernetes

In this section, you can see an example of basic logging in Kubernetes that outputs data to the standard output stream. This demonstration uses a [pod specification](/docs/user-guide/logging/counter-pod.yaml) with a container that writes some text to standard output once per second.
In this section, you can see an example of basic logging in Kubernetes that outputs data to the standard output stream. This demonstration uses a [pod specification](/docs/user-guide/logging/examples/counter-pod.yaml) with a container that writes some text to standard output once per second.

{% include code.html language="yaml" file="counter-pod.yaml" %}
{% include code.html language="yaml" file="examples/counter-pod.yaml" %}

To run this pod, use the following command:

@@ -34,12 +34,9 @@ To fetch the logs, use the `kubectl logs` command, as follows

```shell
$ kubectl logs counter
0: Tue Jun 2 21:37:31 UTC 2015
1: Tue Jun 2 21:37:32 UTC 2015
2: Tue Jun 2 21:37:33 UTC 2015
3: Tue Jun 2 21:37:34 UTC 2015
4: Tue Jun 2 21:37:35 UTC 2015
5: Tue Jun 2 21:37:36 UTC 2015
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```

@@ -91,17 +88,111 @@ Kubernetes doesn't specify a logging agent, but two optional logging agents are

### Using a sidecar container with the logging agent

![Using a sidecar container with the logging agent](/images/docs/user-guide/logging/logging-with-sidecar.png)
You can use a sidecar container in one of the following ways:

You can implement cluster-level logging by including a dedicated logging agent _for each application_ on your cluster. You can include this logging agent as a "sidecar" container in the pod spec for each application; the sidecar container should contain only the logging agent.
* The sidecar container streams application logs to its own `stdout`.
* The sidecar container runs a logging agent, which is configured to pick up logs from an application container.

The concrete implementation of the logging agent, the interface between agent and the application, and the interface between the logging agent and the logs back-end are completely up to a you. For an example implementation, see the [fluentd sidecar container](https://github.com/kubernetes/contrib/tree/b70447aa59ea14468f4cd349760e45b6a0a9b15d/logging/fluentd-sidecar-gcp) for the Stackdriver logging backend.
#### Streaming sidecar container

**Note:** Using a sidecar container for logging may lead to significant resource consumption.
![Sidecar container with a streaming container](/images/docs/user-guide/logging/logging-with-streaming-sidecar.png)

Using this approach you can re-use the per-node agent and the kubelet's log
handling mechanisms. A separate container contains a simple piece of software
that reads logs from a file, a socket, or journald and prints them to its own
`stdout` or `stderr`. This solution also lets you separate several log streams
coming from different parts of an application, some of which may lack support
for writing to stdout or stderr. Since the logic behind redirecting logs
is minimal, it's hardly a significant overhead. Additionally, because
stdout and stderr are handled by the kubelet, you can use tools like
`kubectl logs` out of the box.

Consider the following example: an application writes to two files in two
different formats, using the following pod specification:

{% include code.html language="yaml" file="examples/two-files-counter-pod.yaml" %}

It would be a mess to have log entries of different formats in the same log
stream, even if you managed to redirect both components to the stdout of
the container. Instead, you can use the approach described earlier and
introduce two sidecar containers, each tailing one of the log files from a
shared volume and redirecting it to its own standard output.

{% include code.html language="yaml" file="examples/two-files-counter-pod-streaming-sidecar.yaml" %}

Now, if you run this pod, you can access each log stream separately by
running the following commands:

```shell
$ kubectl logs counter count-log-1
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
```

```shell
$ kubectl logs counter count-log-2
Mon Jan 1 00:00:00 UTC 2001 INFO 0
Mon Jan 1 00:00:01 UTC 2001 INFO 1
Mon Jan 1 00:00:02 UTC 2001 INFO 2
...
```

A node-level agent installed in your cluster picks up those log streams
automatically without any further configuration. If you like, the agent can
be configured to parse log lines depending on the source container.

The streaming sidecar container can also take responsibility for log rotation
and retention. For example, imagine you have an old application that can only
write to a single file. Running rsyslog together with some log rotation
mechanism can solve the problem of keeping the size of the log file inside
the container under a certain limit.
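
As a rough sketch only (not part of the examples in this PR), a size-capping
sidecar for the pod above could look like the fragment below; the container
name and the 1 MiB limit are invented for illustration, and a naive truncation
loop stands in for a real rsyslog or logrotate setup. It would be appended to
the `containers:` list of the two-files-counter pod:

```yaml
  # Hypothetical sidecar: truncate /var/log/1.log whenever it grows past ~1 MiB.
  # A production setup would use rsyslog/logrotate with proper rotation and
  # retention instead of discarding old entries.
  - name: count-log-rotate
    image: busybox
    args: [/bin/sh, -c,
           'while true; do
              size="$(wc -c < /var/log/1.log 2>/dev/null || echo 0)";
              if [ "$size" -gt 1048576 ]; then : > /var/log/1.log; fi;
              sleep 60;
            done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
```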

However, remember that it's always recommended to use stdout and stderr
directly and leave rotation and retention policies to the kubelet. If you have
an application that writes to a single file, it's generally better to set
`/dev/stdout` as the destination rather than implement the streaming sidecar
container approach.
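
For illustration only (this sketch is not part of the PR), that recommendation
might look like the following pod; the pod name and log line are invented, and
the application's "single file" is simply pointed at `/dev/stdout`, so the
kubelet captures the stream and `kubectl logs` works without any sidecar:

```yaml
apiVersion: v1
kind: Pod
metadata:
  # Hypothetical name, not one of the examples above.
  name: single-file-counter
spec:
  containers:
  - name: count
    image: busybox
    # The application writes its only log "file" to /dev/stdout.
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)" >> /dev/stdout; i=$((i+1)); sleep 1; done']
```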

#### Sidecar container with a logging agent

![Sidecar container with a logging agent](/images/docs/user-guide/logging/logging-with-sidecar-agent.png)

If the node-level agent is not flexible enough for your situation, you can
create a sidecar container with a separate logging agent, which you have
configured specifically to run with the given application.

**Note**, however, that using a logging agent in a sidecar container may lead
to significant resource consumption. Moreover, you won't be able to access
those logs using the `kubectl logs` command, because they are not controlled
by the kubelet.

As an example, let's use the [Stackdriver logging agent](/docs/user-guide/logging/stackdriver/)
with the application you saw earlier:

{% include code.html language="yaml" file="examples/two-files-counter-pod.yaml" %}

Apart from adding the logging agent itself, you need to configure it. A good
way to configure a container is by using [ConfigMaps](/docs/user-guide/configmap/).
Let's create a ConfigMap with the agent configuration. Explaining how to
configure fluentd, used here as the Stackdriver logging agent, is beyond the
scope of this article; you can learn more in
[the official fluentd documentation](http://docs.fluentd.org/).

{% include code.html language="yaml" file="examples/fluentd-sidecar-config.yaml" %}

Now you can add the sidecar container to the original pod, mounting this
configuration in the place where fluentd expects to find it.

{% include code.html language="yaml" file="examples/two-files-counter-pod-agent-sidecar.yaml" %}

Remember that this is just an example; you can replace fluentd with any
logging agent that reads from any source inside the application container.

### Exposing logs directly from the application

![Exposing logs directly from the application](/images/docs/user-guide/logging/logging-from-application.png)

You can implement cluster-level logging by exposing or pushing logs directly from every application itself; however, the implementation for such a logging mechanism is outside the scope of Kubernetes.

8 changes: 4 additions & 4 deletions docs/user-guide/logging/stackdriver.md
@@ -31,14 +31,14 @@ Here is the same information in a picture which shows how the pods might be plac
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Stackdriver Logging. A pod which provides the
[cluster DNS service](/docs/admin/dns) runs on one of the nodes and a pod which provides monitoring support runs on another node.

To help explain how cluster-level logging works, consider the following synthetic log generator pod specification [counter-pod.yaml](/docs/user-guide/logging/counter-pod.yaml):
To help explain how cluster-level logging works, consider the following synthetic log generator pod specification [counter-pod.yaml](/docs/user-guide/logging/examples/counter-pod.yaml):

{% include code.html language="yaml" file="counter-pod.yaml" %}
{% include code.html language="yaml" file="examples/counter-pod.yaml" %}

This pod specification has one container which runs a bash script when the container is born. This script simply writes out the value of a counter and the date once per second and runs indefinitely. Let's create the pod in the default namespace.

```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
$ kubectl create -f counter-pod.yaml
pods/counter
```

@@ -93,7 +93,7 @@ pods/counter
Now let's restart the counter.

```shell
$ kubectl create -f examples/blog-logging/counter-pod.yaml
$ kubectl create -f counter-pod.yaml
pods/counter
```
