#50 | release v2.1.0
bourbonkk committed Apr 29, 2022
1 parent 933df6a commit e71e2db
Showing 11 changed files with 374 additions and 53 deletions.
4 changes: 2 additions & 2 deletions content/en/_index.html
@@ -27,8 +27,8 @@


{{% blocks/lead color="primary" %}}
The Clymene is a time series data and logs collection platform for distributed systems inspired by [Prometheus](https://prometheus.io) and [Jaeger](https://www.jaegertracing.io).
Time series data from various environments can be collected and stored in different types of databases. It can be configured in a variety of architectures.
Clymene is a time-series data and log collection platform for distributed systems inspired by [Prometheus](https://prometheus.io) and [Jaeger](https://www.jaegertracing.io). Time-series data and logs from various environments can be collected and stored in
different types of databases, and the platform can be configured in a variety of architectures. Choose the storage that suits your users and build a monitoring system with dashboards they can use easily.
{{% /blocks/lead %}}

{{< blocks/section color="dark" >}}
11 changes: 11 additions & 0 deletions content/en/blog/releases/2.1.0.md
@@ -0,0 +1,11 @@

---
title: "2.1.0 (2022-04-30)"
linkTitle: "v2.1.0"
date: 2022-04-30
---
1. Completed success/failure metric integration for the gateway, InfluxDB, OpenTSDB, and TDengine components. Issue: [#40](https://github.com/Clymene-project/Clymene/issues/40)
2. Added a promtail-ingester component so that Kafka can be used as promtail's log transfer backend. [#47](https://github.com/Clymene-project/Clymene/issues/47)
3. promtail: log data transfer to the gateway (gRPC, HTTP). [#46](https://github.com/Clymene-project/Clymene/issues/46)
4. Developed a promtail-gateway component for the log data pipeline (see the sketch below). [#53](https://github.com/Clymene-project/Clymene/issues/53)
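
For reference, the sketch below shows how these pieces might be wired together into the new log pipeline (promtail → promtail-gateway → Elasticsearch). It is a minimal, hypothetical docker-compose fragment: the `clymene-promtail` image name is an assumption, while the flags and `STORAGE_TYPE` values come from the option docs added in this release.

```yaml
# Hypothetical sketch of the v2.1.0 log pipeline: promtail agent -> promtail-gateway -> Elasticsearch.
# The clymene-promtail image name is an assumption; flags and STORAGE_TYPE values follow the option docs.
version: '2'
services:
  promtail-gateway:
    image: bourbonkk/promtail-gateway:latest
    ports:
      - "15610:15610"                              # gRPC receiver
    environment:
      - STORAGE_TYPE=elasticsearch                 # where the gateway stores incoming logs
    command:
      - --es.server-urls=http://[ELASTICSEARCH-IP]:9200

  clymene-promtail:
    image: bourbonkk/clymene-promtail:latest       # assumed image name
    environment:
      - STORAGE_TYPE=gateway                       # send collected logs to the promtail-gateway
    command:
      - --gateway.grpc.host-port=promtail-gateway:15610
```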

35 changes: 20 additions & 15 deletions content/en/docs/Database options/gateway.md
@@ -5,24 +5,29 @@ title: "clymene-gateway Options"
linkTitle: "clymene-gateway Options"
date: 2017-01-18
description: >
The Clymene Gateway is an optional service that can receive metric data from the agent through gRPC, HTTP communication
  The Clymene/Promtail Gateway is an optional service that can receive time-series data and logs from the agents through gRPC or HTTP communication
---
### clymene-gateway Options

```
--gateway.grpc.discovery.min-peers int Max number of collectors to which the agent will try to connect at any given time (default 3)
--gateway.grpc.host-port string Comma-separated string representing host:port of a static list of gateways to connect to directly (default "localhost:15610")
--gateway.grpc.retry.max uint Sets the maximum number of retries for a call (default 3)
--gateway.grpc.tls.ca string Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore)
--gateway.grpc.tls.cert string Path to a TLS Certificate file, used to identify this process to the remote server(s)
--gateway.grpc.tls.enabled Enable TLS when talking to the remote server(s)
--gateway.grpc.tls.key string Path to a TLS Private Key file, used to identify this process to the remote server(s)
--gateway.grpc.tls.server-name string Override the TLS server name we expect in the certificate of the remote server(s)
--gateway.grpc.tls.skip-host-verify (insecure) Skip server's certificate chain and host name verification
--gateway.http.max-err-msg-len int Maximum length of error message (default 256)
--gateway.http.timeout duration Time out when doing remote write(sec, default 10 sec) (default 10s)
--gateway.http.url string the clymene-gateway remote write HTTP receiver endpoint(/api/metrics) (default "http://localhost:15611/api/metrics")
--gateway.http.user.agent string User-Agent in request header (default "Clymene/")
--gateway.service-type string gateway service type(grpc or http) (default "grpc")
--gateway.grpc.discovery.min-peers int Max number of collectors to which the agent will try to connect at any given time (default 3)
--gateway.grpc.host-port string Comma-separated string representing host:port of a static list of gateways to connect to directly (default "localhost:15610")
--gateway.grpc.retry.max uint Sets the maximum number of retries for a call (default 3)
--gateway.grpc.tls.ca string Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore)
--gateway.grpc.tls.cert string Path to a TLS Certificate file, used to identify this process to the remote server(s)
--gateway.grpc.tls.enabled Enable TLS when talking to the remote server(s)
--gateway.grpc.tls.key string Path to a TLS Private Key file, used to identify this process to the remote server(s)
--gateway.grpc.tls.server-name string Override the TLS server name we expect in the certificate of the remote server(s)
--gateway.grpc.tls.skip-host-verify (insecure) Skip server's certificate chain and host name verification
--gateway.http.logs.url string the clymene-gateway logs write HTTP receiver endpoint(/api/logs) (default "http://localhost:15611/api/logs")
--gateway.http.max-err-msg-len int Maximum length of error message (default 256)
--gateway.http.timeout duration Time out when doing remote write(sec, default 10 sec) (default 10s)
--gateway.http.url string the clymene-gateway remote write HTTP receiver endpoint(/api/metrics) (default "http://localhost:15611/api/metrics")
--gateway.http.user.agent string User-Agent in request header (default "Clymene/")
--gateway.service-type string gateway service type(grpc or http) (default "grpc")
-h, --help help for Clymene-promtail
--log-level string Minimal allowed log Level. For more levels see https://github.com/uber-go/zap (default "info")
--metrics-backend string Defines which metrics backend to use for metrics reporting: expvar, prometheus, none (default "prometheus")
--metrics-http-route string Defines the route of HTTP endpoint for metrics backends that support scraping (default "/metrics")
```
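
As a usage illustration only, a component such as the agent or promtail can be pointed at the gateway with these options roughly as follows; the service and image names are placeholders, while the flags and the `STORAGE_TYPE=gateway` value are the ones documented in these pages.

```yaml
# Minimal, hypothetical fragment: an agent using the gateway as its storage backend.
# Image/service names are placeholders; the flags match the option list above.
services:
  clymene-agent:
    image: bourbonkk/clymene-agent:latest          # placeholder image name
    environment:
      - STORAGE_TYPE=gateway
    command:
      - --gateway.service-type=grpc
      - --gateway.grpc.host-port=clymene-gateway:15610
      # Or, to use HTTP instead of gRPC:
      # - --gateway.service-type=http
      # - --gateway.http.url=http://clymene-gateway:15611/api/metrics
      # - --gateway.http.logs.url=http://clymene-gateway:15611/api/logs
```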

1 change: 1 addition & 0 deletions content/en/docs/Database options/kafka.md
@@ -38,6 +38,7 @@ description: >
--kafka.producer.tls.server-name string Override the TLS server name we expect in the certificate of the remote server(s)
--kafka.producer.tls.skip-host-verify (insecure) Skip server's certificate chain and host name verification
--kafka.producer.topic string The name of the kafka topic (default "clymene")
--kafka.producer.promtail.topic string The name of the promtail kafka topic to consume from (default "clymene-logs")
```
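
For illustration, a promtail instance writing to Kafka could override the log topic like this; the image name and the `--kafka.producer.brokers` flag are assumptions (only the topic and TLS options are shown above), so check the full Kafka option list for your version.

```yaml
# Hypothetical fragment: promtail producing logs to a custom Kafka topic.
# The image name and --kafka.producer.brokers are assumptions; the topic flag is documented above.
services:
  clymene-promtail:
    image: bourbonkk/clymene-promtail:latest       # assumed image name
    environment:
      - STORAGE_TYPE=kafka
    command:
      - --kafka.producer.brokers=[KAFKA-IP]:9092   # assumed flag, analogous to --kafka.consumer.brokers
      - --kafka.producer.promtail.topic=clymene-logs
```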

2 changes: 1 addition & 1 deletion content/en/docs/Getting started/gateway.md
@@ -4,7 +4,7 @@ tags: ["test", "sample", "docs"]
title: "Clymene Gateway Getting Start"
linkTitle: "Clymene Gateway"
date: 2017-01-05
weight: 4
weight: 3
description: >
  The Clymene Gateway is an optional service that can receive metric data from the agent through gRPC or HTTP communication
---
2 changes: 1 addition & 1 deletion content/en/docs/Getting started/ingester.md
@@ -4,7 +4,7 @@ tags: ["test", "sample", "docs"]
title: "Clymene Ingester Getting Start"
linkTitle: "Clymene Ingester"
date: 2017-01-05
weight: 3
weight: 2
description: >
  The Clymene ingester is an optional service responsible for inserting time-series data loaded on Kafka into the database
---
113 changes: 113 additions & 0 deletions content/en/docs/Getting started/promtail-gateway.md
@@ -0,0 +1,113 @@
---
categories: ["Examples"]
tags: ["test", "sample", "docs"]
title: "Promtail Gateway Getting Start"
linkTitle: "Promtail Gateway"
date: 2017-01-18
weight: 6
description: >
  The Promtail Gateway is an optional service that can receive log data from the agent through gRPC or HTTP communication.
---
The Promtail Gateway is an optional service that can receive log data from the agent through gRPC or HTTP communication.

1. Receives log data over gRPC and HTTP
2. Inserts log data into a database (Elasticsearch, Loki, etc.) (optional)

## How to set up the gateway
```
--admin.http.host-ports string The host:ports (e.g. 127.0.0.1:15690 or :15690) for the admin server, including health check, /metrics, etc. (default ":15690")
--gateway.grpc-server.host-port string The host:port (e.g. 127.0.0.1:15610 or :15610) of the gateway's GRPC server (default ":15610")
--gateway.grpc.tls.cert string Path to a TLS Certificate file, used to identify this server to clients
--gateway.grpc.tls.client-ca string Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted)
--gateway.grpc.tls.enabled Enable TLS on the server
--gateway.grpc.tls.key string Path to a TLS Private Key file, used to identify this server to clients
--gateway.http-server.host-port string The host:port (e.g. 127.0.0.1:15610 or :15611) of the gateway's HTTP server (default ":15611")
--gateway.http.tls.cert string Path to a TLS Certificate file, used to identify this server to clients
--gateway.http.tls.client-ca string Path to a TLS CA (Certification Authority) file used to verify certificates presented by clients (if unset, all clients are permitted)
--gateway.http.tls.enabled Enable TLS on the server
--gateway.http.tls.key string Path to a TLS Private Key file, used to identify this server to clients
--log-level string Minimal allowed log Level. For more levels see https://github.com/uber-go/zap (default "info")
--metrics-backend string Defines which metrics backend to use for metrics reporting: expvar, prometheus, none (default "prometheus")
--metrics-http-route string Defines the route of HTTP endpoint for metrics backends that support scraping (default "/metrics")
```
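
For example, TLS can be enabled on the gateway's gRPC server with the flags above. The sketch below is a minimal docker-compose fragment in which the certificate paths and the mounted volume are placeholders.

```yaml
# Minimal sketch: promtail-gateway with TLS enabled on its gRPC server.
# Certificate paths and the volume mount are placeholders.
services:
  promtail-gateway:
    image: bourbonkk/promtail-gateway:latest
    ports:
      - "15610:15610"   # gRPC server
      - "15611:15611"   # HTTP server
      - "15690:15690"   # admin server (health check, /metrics)
    volumes:
      - ./certs:/certs:ro                          # placeholder certificate mount
    environment:
      - STORAGE_TYPE=elasticsearch
    command:
      - --es.server-urls=http://[ELASTICSEARCH-IP]:9200
      - --gateway.grpc.tls.enabled=true
      - --gateway.grpc.tls.cert=/certs/server.crt
      - --gateway.grpc.tls.key=/certs/server.key
      - --gateway.grpc.tls.client-ca=/certs/ca.crt # optional: require client certificates
```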

## How to set up the Storage Type
#### 1. Set environment variables


ElasticSearch
```
STORAGE_TYPE=elasticsearch
```
Loki
```
STORAGE_TYPE=loki
```

Promtail-gateway
```
STORAGE_TYPE=gateway
```

Kafka
```
STORAGE_TYPE=kafka
```

#### 2. Option description by storage type

- [ElasticSearch option](http://clymene-project.github.io/docs/database-options/elasticsearch)
- [Loki option](http://clymene-project.github.io/docs/database-options/loki)
- [Kafka option](http://clymene-project.github.io/docs/database-options/kafka)
- [Promtail-gateway](http://clymene-project.github.io/docs/database-options/gateway)


### Docker-compose Example
```yaml
version: '2'
services:
  promtail-gateway:
    image: bourbonkk/promtail-gateway:latest
    ports:
      - "15610:15610"
    environment:
      - STORAGE_TYPE=elasticsearch
    command:
      - --log-level=debug
      - --es.server-urls=http://[ELASTICSEARCH-IP]:9200
```

### k8s Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: promtail-gateway
  namespace: clymene
  labels:
    app: promtail-gateway
spec:
  selector:
    matchLabels:
      app: promtail-gateway
  replicas: 1
  template:
    metadata:
      labels:
        app: promtail-gateway
    spec:
      containers:
        - name: promtail-gateway
          image: bourbonkk/promtail-gateway:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 15610
          args:
            - --es.server-urls=http://[ELASTICSEARCH-IP]:9200
            - --log-level=info
          env:
            - name: STORAGE_TYPE
              value: elasticsearch
          securityContext:
            runAsUser: 1000
```
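
To let promtail agents reach the gateway inside the cluster, a Service like the one below can be added; this is an illustrative sketch whose names and ports simply mirror the Deployment above.

```yaml
# Illustrative Service exposing the gateway's gRPC and HTTP ports inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: promtail-gateway
  namespace: clymene
spec:
  selector:
    app: promtail-gateway
  ports:
    - name: grpc
      port: 15610
      targetPort: 15610
    - name: http
      port: 15611
      targetPort: 15611
```
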
135 changes: 135 additions & 0 deletions content/en/docs/Getting started/promtail-ingester.md
@@ -0,0 +1,135 @@
---
categories: ["Examples"]
tags: ["test", "sample", "docs"]
title: "Promtail ingester Getting Start"
linkTitle: "Promtail ingester"
date: 2017-01-18
weight: 5
description: >
  The Promtail ingester is an optional service responsible for inserting log data loaded on Kafka into the database
---
The Promtail ingester is an optional service responsible for inserting log data loaded on Kafka into the database.

1. Consumes messages from Kafka
2. Inserts log data into a database (Elasticsearch, Loki, etc.) (optional)

## How to set up the Kafka consumer

```
--admin.http.host-ports string The host:ports (e.g. 127.0.0.1:15694 or :15694) for the admin server, including health check, /metrics, etc. (default ":15694")
--kafka.consumer.authentication string Authentication type used to authenticate with kafka cluster. e.g. none, kerberos, tls, plaintext (default "none")
--kafka.consumer.brokers string The comma-separated list of kafka brokers. i.e. '127.0.0.1:9092,0.0.0:1234' (default "127.0.0.1:9092")
--kafka.consumer.client-id string The Consumer Client ID that clymene-ingester will use (default "clymene")
--kafka.consumer.encoding string The encoding of metrics ("json", "protobuf") consumed from kafka (default "protobuf")
--kafka.consumer.group-id string The Consumer Group that clymene-ingester will be consuming on behalf of (default "clymene")
--kafka.consumer.kerberos.config-file string Path to Kerberos configuration. i.e /etc/krb5.conf (default "/etc/krb5.conf")
--kafka.consumer.kerberos.keytab-file string Path to keytab file. i.e /etc/security/kafka.keytab (default "/etc/security/kafka.keytab")
--kafka.consumer.kerberos.password string The Kerberos password used for authenticate with KDC
--kafka.consumer.kerberos.realm string Kerberos realm
--kafka.consumer.kerberos.service-name string Kerberos service name (default "kafka")
--kafka.consumer.kerberos.use-keytab Use of keytab instead of password, if this is true, keytab file will be used instead of password
--kafka.consumer.kerberos.username string The Kerberos username used for authenticate with KDC
--kafka.consumer.plaintext.mechanism string The plaintext Mechanism for SASL/PLAIN authentication, e.g. 'SCRAM-SHA-256' or 'SCRAM-SHA-512' or 'PLAIN' (default "PLAIN")
--kafka.consumer.plaintext.password string The plaintext Password for SASL/PLAIN authentication
--kafka.consumer.plaintext.username string The plaintext Username for SASL/PLAIN authentication
--kafka.consumer.promtail.topic string The name of the promtail kafka topic to consume from (default "clymene-logs")
--kafka.consumer.protocol-version string Kafka protocol version - must be supported by kafka server
--kafka.consumer.tls.ca string Path to a TLS CA (Certification Authority) file used to verify the remote server(s) (by default will use the system truststore)
--kafka.consumer.tls.cert string Path to a TLS Certificate file, used to identify this process to the remote server(s)
--kafka.consumer.tls.enabled Enable TLS when talking to the remote server(s)
--kafka.consumer.tls.key string Path to a TLS Private Key file, used to identify this process to the remote server(s)
--kafka.consumer.tls.server-name string Override the TLS server name we expect in the certificate of the remote server(s)
--kafka.consumer.tls.skip-host-verify (insecure) Skip server's certificate chain and host name verification
--log-level string Minimal allowed log Level. For more levels see https://github.com/uber-go/zap (default "info")
--metrics-backend string Defines which metrics backend to use for metrics reporting: expvar, prometheus, none (default "prometheus")
--metrics-http-route string Defines the route of HTTP endpoint for metrics backends that support scraping (default "/metrics")
--promtail-ingester.deadlockInterval duration Interval to check for deadlocks. If no messages gets processed in given time, clymene-ingester app will exit. Value of 0 disables deadlock check. (default 0s)
--promtail-ingester.parallelism string The number of messages to process in parallel (default "1000")
```
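
As an example of these options in use, the fragment below overrides the consumer topic, group, and parallelism; the broker and Elasticsearch addresses are placeholders, while the flags themselves are taken from the list above.

```yaml
# Minimal sketch: promtail-ingester with consumer topic, group, and parallelism overridden.
# Broker and Elasticsearch addresses are placeholders; the flags are from the option list above.
services:
  promtail-ingester:
    image: bourbonkk/promtail-ingester:latest
    environment:
      - STORAGE_TYPE=elasticsearch
    command:
      - --kafka.consumer.brokers=[KAFKA-IP]:9092
      - --kafka.consumer.promtail.topic=clymene-logs
      - --kafka.consumer.group-id=promtail-ingester
      - --promtail-ingester.parallelism=2000
      - --es.server-urls=http://[ELASTICSEARCH-IP]:9200
```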

## How to set up the Storage Type

#### 1. Set environment variables

ElasticSearch
```
STORAGE_TYPE=elasticsearch
```
Loki
```
STORAGE_TYPE=loki
```

Promtail-gateway
```
STORAGE_TYPE=gateway
```

Kafka
```
STORAGE_TYPE=kafka
```

#### 2. Option description by storage type

- [ElasticSearch option](http://clymene-project.github.io/docs/database-options/elasticsearch)
- [Loki option](http://clymene-project.github.io/docs/database-options/loki)
- [Kafka option](http://clymene-project.github.io/docs/database-options/kafka)
- [Promtail-gateway](http://clymene-project.github.io/docs/database-options/gateway)


### Docker-compose Example

```yaml
version: '2'
services:
  promtail-ingester:
    image: bourbonkk/promtail-ingester:latest
    ports:
      - "15694:15694"
    environment:
      # - STORAGE_TYPE=elasticsearch,prometheus # use composite writer
      - STORAGE_TYPE=elasticsearch
    command:
      - --log-level=debug
      - --kafka.consumer.brokers=[KAFKA-IP]:9092
      - --es.server-urls=http://[ELASTICSEARCH-IP]:9200
      # - --prometheus.remote.url=http://prometheus:9090/api/v1/write
```

### k8s Example

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: promtail-ingester
  namespace: clymene
  labels:
    app: promtail-ingester
spec:
  selector:
    matchLabels:
      app: promtail-ingester
  replicas: 1
  template:
    metadata:
      labels:
        app: promtail-ingester
    spec:
      containers:
        - name: promtail-ingester
          image: bourbonkk/promtail-ingester:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 15694
          args:
            - --es.server-urls=http://[ELASTICSEARCH-IP]:9200
            - --log-level=info
            - --kafka.consumer.brokers=clymene-kafka-broker:9092
          env:
            - name: STORAGE_TYPE
              value: elasticsearch
          securityContext:
            runAsUser: 1000
```
