Improve logging diagram and description #6486

Merged 5 commits on Dec 4, 2019
9 changes: 6 additions & 3 deletions docs/logging/01-01-logging.md
@@ -2,9 +2,12 @@
title: Overview
---

Logging in Kyma uses [Loki](https://github.com/grafana/loki), a Prometheus-like log management system. This lightweight solution, integrated with Grafana, is easy to understand and operate. The main elements of the logging stack include:
* The Agent, acting as a log router for Docker containers. It runs inside Docker, checks each container, and routes the logs to the log management system. Currently, Kyma supports the [Promtail](https://github.com/grafana/loki/tree/master/docs/clients/promtail) and [Fluent Bit](https://fluentbit.io/) log collectors. For details on log collector configuration, see [this](/components/logging/#tutorials-configure-the-log-collector) tutorial.
* The Loki main server, which stores logs and processes queries.
* [Grafana](https://grafana.com/), a logging and metrics platform used for querying and displaying logs.


>**NOTE:** At the moment, Kyma provides an **alpha** version of the Logging component. The default Loki Pod log tailing configuration does not work with Kubernetes version 1.14 (for GKE, version 1.12.6-gke.X) and above. For setup and deployment preparation, see the [cluster installation](/root/kyma/#installation-install-kyma-on-a-cluster) guide.

>**CAUTION:** Loki is designed for application logging. Do not log any sensitive information, such as passwords or credit card numbers.
26 changes: 12 additions & 14 deletions docs/logging/02-01-logging.md
@@ -4,22 +4,20 @@ title: Architecture

This document provides an overview of the logging architecture in Kyma.

![Logging architecture in Kyma](./assets/logging-architecture.svg)

1. Container logs are stored under the `/var/log` directory and its subdirectories.
2. The agent queries the [Kubernetes API Server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) which validates and configures data for objects such as Pods or Services.
3. The agent fetches Pod and container details. Based on that, it tails the logs.
4. The agent enriches log data with Pod labels and sends it to the Loki server. To enable faster data processing, log data is organized in log chunks. A log chunk consists of logs and their metadata, such as labels, collected over a certain time period.
5. The Loki server processes the log data and stores it in the log store. The labels are stored in the index store.
6. The user queries the logs using the following tools:

* Grafana dashboards to analyze and visualize logs.
* API clients to query log data using the [HTTP API](https://github.com/grafana/loki/blob/master/docs/api.md) for Loki.
* Log UI, accessed from the Kyma Console, to display and analyze logs.
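As an illustration of the API client option, the sketch below parses log lines out of a Loki query response. The JSON here is a simplified, assumed shape of an `/api/prom/query` reply, not verbatim Loki output.

```bash
# Hypothetical sketch: extracting raw log lines from a Loki query response.
# The JSON below is a simplified, assumed response shape for /api/prom/query.
response='{"streams":[{"labels":"{namespace=\"kyma-system\"}","entries":[{"ts":"2019-12-04T10:00:00Z","line":"error: connection refused"}]}]}'

# Print the raw log line of every entry in every stream
# (python3 is used only as a portable JSON parser):
echo "$response" | python3 -c '
import json, sys
data = json.load(sys.stdin)
for stream in data["streams"]:
    for entry in stream["entries"]:
        print(entry["line"])
'
```

Against a real deployment, the `curl` call shown in the access-logs tutorial produces a response of this general form, which a client then iterates over stream by stream.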

## Agent (Promtail)
Promtail is the agent responsible for collecting metadata that is consistent with time series and metrics metadata. To achieve this, the agent uses the same service discovery and relabelling libraries as Prometheus. Promtail runs as a DaemonSet to discover targets, create metadata labels, and tail log files, producing a stream of logs. The logs are buffered on the client side and then sent to the service.
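The label-creation step can be illustrated with a shell sketch. The path layout is the standard Kubernetes `/var/log/containers` naming convention (`<pod>_<namespace>_<container>-<container-id>.log`); the parsing below is an illustration of how labels can be derived from it, not Promtail's actual implementation.

```bash
# Illustration only: derive Promtail-style labels from a Kubernetes container
# log path of the form <pod>_<namespace>_<container>-<container-id>.log.
log_path="/var/log/containers/loki-0_kyma-system_loki-abc123def456.log"

base=$(basename "$log_path" .log)   # loki-0_kyma-system_loki-abc123def456
pod=${base%%_*}                     # part before the first "_"
rest=${base#*_}
namespace=${rest%%_*}               # part between the underscores
container=${rest#*_}
container=${container%-*}           # strip the trailing container ID

echo "pod=$pod namespace=$namespace container=$container"
```

Labels recovered this way (Pod, Namespace, container) are what the agent attaches to each log stream before shipping it to Loki.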

## Log chunks
A log chunk consists of all logs matching a given set of metadata, such as labels, collected over a certain time period. Log chunks support append, seek, and stream operations.

## Life of a write request
The write request path resembles [Cortex](https://github.com/cortexproject/cortex) architecture, using the same server-side components. It looks as follows:
1. The write request reaches the distributor service, which is responsible for distributing and replicating the requests to ingesters. Loki uses the Cortex consistent hash ring and distributes requests based on the hash of the entire metadata set.
2. The write request goes to the log ingester which batches the requests for the same stream into the log chunks stored in memory. When the log chunks reach a predefined size or age, they are flushed out to the Cortex chunk store.
3. The Cortex chunk store will be updated to reduce copying of chunk data on the read and write paths and to add support for writing chunks to Google Cloud Storage.
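The distribution step in point 1 can be sketched as follows. A plain `cksum`-plus-modulo stands in for the real Cortex consistent hash ring, so this only illustrates the idea of routing a stream by the hash of its full label set.

```bash
# Sketch: route a log stream to one of N ingesters by hashing its label set.
# cksum + modulo is a stand-in for the Cortex consistent hash ring, which
# additionally handles replication and ingester churn.
labels='{namespace="kyma-system", app="loki"}'
num_ingesters=3

hash=$(printf '%s' "$labels" | cksum | cut -d' ' -f1)
index=$(( hash % num_ingesters ))
echo "stream $labels -> ingester $index"
```

Because the hash covers the entire metadata set, all writes for the same stream land on the same ingester, which is what allows step 2 to batch them into chunks.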

## Life of a query request
Log chunks are larger than Prometheus Cortex chunks (Cortex chunks do not exceed 1 KB). As a result, you cannot load and decompress them as a whole.
To solve this problem, Loki supports streaming and iterating over the chunks. This means it can decompress only the necessary chunk parts.
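A rough analogy for this chunked access pattern, with `gzip` standing in for Loki's chunk compression:

```bash
# Analogy: logs split into separately compressed chunks mean a query can
# decompress only the chunks in its time range, never the whole data set.
workdir=$(mktemp -d)
printf 'old log line\n'       | gzip > "$workdir/chunk-001.gz"
printf 'error: recent line\n' | gzip > "$workdir/chunk-002.gz"

# A query over recent logs touches only the newest chunk:
gzip -dc "$workdir/chunk-002.gz"
```

Loki goes further than this by streaming within a chunk as well, so even a single large chunk is decompressed incrementally rather than loaded whole.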

For further information, see the [design documentation](https://docs.google.com/document/d/11tjK_lvp1-SVsFZjgOTr1vV3-q6vBAsZYIQ5ZeYBkyM/view).
6 changes: 3 additions & 3 deletions docs/logging/03-01-access-logs.md
@@ -5,19 +5,19 @@ type: Details

To access the logs, follow these steps:

1. Run the following command to get the Pod name:

```bash
kubectl get pods -l app=loki -n kyma-system
```

2. Run the following command to configure port forwarding, replacing `<pod_name>` with the output of the previous command:

```bash
kubectl port-forward -n kyma-system <pod_name> 3100:3100
```

3. To get the first 1000 lines of error logs for components in the `kyma-system` Namespace, run the following command:

```bash
curl -X GET -G 'http://localhost:3100/api/prom/query' --data-urlencode 'query={namespace="kyma-system"}' --data-urlencode 'limit=1000' --data-urlencode 'regexp=error'
```
3 changes: 3 additions & 0 deletions docs/logging/assets/logging-architecture.svg
Binary file removed docs/logging/assets/loki-overview.png