(feat) enable ELK stack + Cerebro
jonathanlermitage committed Dec 9, 2018
1 parent 22b4472 commit 6b76e37
Showing 12 changed files with 343 additions and 61 deletions.
46 changes: 32 additions & 14 deletions DEPLOY.md
@@ -11,10 +11,21 @@
First, go to the project's root and make the `./do` utility script executable if needed.
* Package and run the application via `./do rd`. The application will start on port 8080 with the `dev` Spring profile.
* To run with another Spring profile (e.g. `prod`), package the application via `./do p`, go to the `target/` directory and run `java -jar -Xms128m -Xmx512m -Dspring.profiles.active=prod manon.jar`.

### Docker Compose (application + nginx + log analysis via ELK + Cerebro)

The application is dockerized with [Jib](https://github.com/GoogleContainerTools/jib) and [Distroless](https://github.com/GoogleContainerTools/distroless), plus [MongoDB Community](https://www.mongodb.com/download-center/community) and [MariaDB](https://downloads.mariadb.org/) databases, [Nginx](http://nginx.org/en/download.html) as an HTTP proxy, and an ELK stack to parse logs. To proceed, follow these steps:

#### Preparation: create directories and install software

* Elasticsearch may require raising the kernel's memory-map limit: `sudo sysctl -w vm.max_map_count=262144`.
* Create data and log directories with read/write permissions:
```bash
mkdir ~/manon-app-logs
mkdir ~/manon-mongo-db
mkdir ~/manon-maria-db
mkdir ~/manon-nginx-logs
mkdir ~/manon-elastic-db
```
* Install **Docker**:
```bash
# install Docker Community Edition, tested on Lubuntu 18.04 LTS
```

@@ -34,20 +45,14 @@

* Install **Docker Compose**:
```bash
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
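The `vm.max_map_count` setting from the first step is reset on reboot. To make it persistent, it can be declared in `/etc/sysctl.conf` (a sketch — the exact file, or a drop-in under `/etc/sysctl.d/`, may vary by distribution):

```
# /etc/sysctl.conf — persist the memory-map limit Elasticsearch needs
vm.max_map_count=262144
```

After editing, `sudo sysctl -p` reloads the file without a reboot.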

#### Build and deploy application

* Build and install application image:
* via Jib: `./do jib`.
* or via traditional `Dockerfile`: `./do docker`.
* Edit `config/docker/docker-compose.yml` if needed (e.g. to customize ports).
* Then run the application image and its dependencies via Docker Compose: `./do up` (a shortcut for `docker-compose -f ./config/docker/docker-compose.yml up -d`).
* MongoDB data is persisted in `~/manon-mongo-db/`. The application listens on port 8080 and its logs are stored in `~/manon-app-logs/`.
* Optional: install MongoDB command-line client and check database connectivity:

@@ -58,3 +63,16 @@
* Replace `8080` with `8000` to access the application via the Nginx proxy.
* Check Nginx error and access logs in `~/manon-nginx-logs`.
* Launch a batch job (e.g. `userSnapshotJob`) via `curl -X POST http://localhost:8000/api/v1/sys/batch/start/userSnapshotJob --user ROOT:woot`, then check the `UserStats` and `UserSnapshot` MongoDB collections (connect to the database, then run `db.UserStats.find()` and `db.UserSnapshot.find()`).

#### Deploy ELK stack and Cerebro

* Run ELK stack images via Docker Compose: `./do upelk`.
* Visit `http://localhost:5601` and go to `Dev Tools`. You can now send queries to Elasticsearch to search the logs:
  * Get application logs via `GET /manon-app-*/_search`.
  * Get Nginx access logs via `GET /manon-nginx-access-*/_search`.
  * You can delete these indices via `DELETE /manon*`, then exercise the application and query the logs again.
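Beyond the bare `_search` calls, Dev Tools accepts full query bodies. A sketch of a filtered search (the `message` and `@timestamp` fields are assumptions — actual field names depend on the Logstash pipeline configuration):

```
GET /manon-app-*/_search
{
  "query": { "match": { "message": "error" } },
  "sort": [ { "@timestamp": { "order": "desc" } } ],
  "size": 5
}
```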

* Optional: run Cerebro via Docker Compose: `./do upcerebro`.
* Visit `http://localhost:9000` and select `Main Cluster` (an alias for `http://elasticsearch:9200`; see the `config/docker/cerebro/cerebro.conf` file for details).

You can now stop the containers via `./do stopcerebro` (Cerebro), `./do stopelk` (ELK stack), and `./do stop` (application and dependencies).
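The `up*` and `stop*` commands used above can be pictured as thin wrappers around Docker Compose, one per compose file. A minimal sketch (hypothetical — the real `./do` script may differ):

```shell
#!/bin/sh
# Hypothetical sketch of the docker-compose wrappers behind `./do up`,
# `./do upelk`, `./do upcerebro` and their `stop*` counterparts.
DOCKER_DIR=./config/docker

up()          { docker-compose -f "$DOCKER_DIR/docker-compose.yml" up -d; }
stop()        { docker-compose -f "$DOCKER_DIR/docker-compose.yml" stop; }
upelk()       { docker-compose -f "$DOCKER_DIR/docker-compose-elk.yml" up -d; }
stopelk()     { docker-compose -f "$DOCKER_DIR/docker-compose-elk.yml" stop; }
upcerebro()   { docker-compose -f "$DOCKER_DIR/docker-compose-cerebro.yml" up -d; }
stopcerebro() { docker-compose -f "$DOCKER_DIR/docker-compose-cerebro.yml" stop; }
```

Keeping one compose file per concern is what lets the ELK stack and Cerebro be started and stopped independently of the application.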
48 changes: 27 additions & 21 deletions README.md
@@ -68,29 +68,35 @@
LinkedIn profile: [jonathan-lermitage-092711142](https://www.linkedin.com/in/jonathan-lermitage-092711142)

You can use the `do.cmd` (Windows) or `./do` (Linux Bash) script:
```
do help show this help message
do t test without code coverage (with embedded MongoDB)
do tc test with code coverage (with embedded MongoDB)
do sc compute and upload Sonar analysis to SonarCloud, needs two env vars:
- TK1_MANON_SONAR_ORGA SonarCloud organization, e.g. jonathanlermitage-github
- TK1_MANON_SONAR_LOGIN SonarCloud authentication token
do tsc similar to "do tc" then "do sc"
do b build without testing
do c clean
do p package application to manon.jar
do rd package and run application with dev profile
do w 3.5.2 set or upgrade Maven wrapper to 3.5.2
do cv check plugins and dependencies versions
do uv update plugins and dependencies versions
do dt show dependencies tree
do rmi stop Docker application, then remove its containers and images
do cdi clean up dangling Docker images
do docker build Docker image with Dockerfile to a Docker daemon
do jib build Docker image with Jib to a Docker daemon
do jibtar build and save Docker image with Jib to a tarball
do up create and start containers via docker-compose
do stop stop containers via docker-compose
do upelk create and start ELK containers via docker-compose
do stopelk stop ELK containers via docker-compose
do upcerebro create and start Cerebro container via docker-compose
do stopcerebro stop Cerebro container via docker-compose
```

Note: the Linux Bash script can chain parameters, e.g. `./do.sh cdi rmi w 3.6.0 c tc docker up`.
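The chaining works because the script treats each argument as a task name and runs them left to right. A minimal sketch of that dispatch loop (hypothetical — the real script maps many more tasks):

```shell
#!/bin/sh
# Minimal sketch of a chainable task runner: each argument names a task,
# executed left to right, so `./do c b p` cleans, builds, then packages.
run_task() {
  case "$1" in
    c) echo "clean" ;;
    b) echo "build (no tests)" ;;
    p) echo "package to manon.jar" ;;
    *) echo "unknown task: $1" >&2; return 1 ;;
  esac
}

for task in "$@"; do
  run_task "$task" || exit 1
done
```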

## License

77 changes: 77 additions & 0 deletions config/docker/cerebro/cerebro.conf
@@ -0,0 +1,77 @@
# Secret will be used to sign session cookies, CSRF tokens and for other encryption utilities.
# It is highly recommended to change this value before running cerebro in production.
secret = "ki:s:[[@=Ag?QI`W2jMwkY:eqvrJ]JqoJyi2axj3ZvOv^/KavOT4ViJSv?6YY4[N"

# Application base path
basePath = "/"

# Defaults to RUNNING_PID at the root directory of the app.
# To avoid creating a PID file set this value to /dev/null
#pidfile.path = "/var/run/cerebro.pid"
pidfile.path=/dev/null

# Rest request history max size per user
rest.history.size = 50 // defaults to 50 if not specified

# Path of local database file
#data.path: "/var/lib/cerebro/cerebro.db"
data.path = "./cerebro.db"

es = {
gzip = true
}

# Authentication
auth = {
# Example of LDAP authentication
#type: ldap
#settings: {
#url = "ldap://host:port"
#base-dn = "ou=active,ou=Employee"
# OpenLDAP might be something like
#base-dn = "ou=People,dc=domain,dc=com"
# Usually method should be left as simple
# Otherwise, set it to the SASL mechanisms to try
#method = "simple"
# Usernames in the form of email addresses (containing @) are passed through unchanged
# Set user-domain to append @user-domain to bare usernames
#user-domain = "domain.com"
# Or leave empty to use user-format formatting
#user-domain = ""
# user-format executes a string.format() operation where
# username is passed in first, followed by base-dn
# Leave username unchanged
#user-format = "%s"
# Like setting user-domain
#user-format = "%s@domain.com"
# Common for OpenLDAP
#user-format = "uid=%s,%s"
#}
# Example of simple username/password authentication
#type: basic
#settings: {
#username = "admin"
#password = "1234"
#}
}

# A list of known hosts
hosts = [
#{
# host = "http://localhost:9200"
# name = "Some Cluster"
#},
# Example of host with authentication
#{
# host = "http://some-authenticated-host:9200"
# name = "Secured Cluster"
# auth = {
# username = "username"
# password = "secret-password"
# }
#}
{
host = "http://elasticsearch:9200"
name = "Main Cluster"
}
]
15 changes: 15 additions & 0 deletions config/docker/docker-compose-cerebro.yml
@@ -0,0 +1,15 @@
version: '3'

services:

# ----------------------------------------
# --- Enrich ELK stack with Cerebro
# ----------------------------------------

cerebro:
container_name: cerebro
image: yannart/cerebro:0.8.1 # see https://hub.docker.com/r/yannart/cerebro/
ports:
- "9000:9000"
volumes:
- ./cerebro/cerebro.conf:/opt/cerebro/conf/application.conf:ro
49 changes: 49 additions & 0 deletions config/docker/docker-compose-elk.yml
@@ -0,0 +1,49 @@
version: '3'

services:

# ----------------------------------------
# --- ELK stack
# ----------------------------------------

elasticsearch:
container_name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.5.2 # see https://hub.docker.com/_/elasticsearch/
ports:
- "9200:9200"
- "9300:9300"
volumes:
- ~/manon-elastic-db:/usr/share/elasticsearch/data
- ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
environment:
ES_JAVA_OPTS: "-Xms32m -Xmx256m"
ulimits:
memlock:
soft: -1
hard: -1

logstash:
container_name: logstash
image: docker.elastic.co/logstash/logstash-oss:6.5.2 # see https://hub.docker.com/_/logstash/
ports:
- "5000:5000"
- "9600:9600"
volumes:
- ~/manon-app-logs/:/manon-app/
- ~/manon-nginx-logs/:/manon-nginx/
- ./logstash/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro
environment:
LS_JAVA_OPTS: "-Xms32m -Xmx256m"
depends_on:
- elasticsearch

kibana:
container_name: kibana
image: docker.elastic.co/kibana/kibana-oss:6.5.2 # see https://hub.docker.com/_/kibana/
ports:
- "5601:5601"
volumes:
- ./kibana:/usr/share/kibana/config:ro
depends_on:
- elasticsearch
20 changes: 12 additions & 8 deletions docker-compose.yml → config/docker/docker-compose.yml
@@ -2,7 +2,10 @@
version: '3'

services:

# ----------------------------------------
# --- Application
# ----------------------------------------

manon:
container_name: manon
image: lermitage-manon:1.0.0-SNAPSHOT
@@ -14,7 +17,6 @@
- mongo
- maria

mongo:
container_name: mongo
image: mongo:4.0.4-xenial # see https://hub.docker.com/_/mongo/
@@ -28,10 +30,9 @@
MONGO_INITDB_ROOT_PASSWORD: woot
MONGO_INITDB_DATABASE: admin

maria:
container_name: maria
image: mariadb:10.3.11-bionic # see https://hub.docker.com/_/mariadb/
ports:
- "3306:3306"
volumes:
@@ -40,14 +41,17 @@
MYSQL_ROOT_PASSWORD: woot
MYSQL_DATABASE: manon

# ----------------------------------------
# --- Frontend proxy
# ----------------------------------------

nginx:
container_name: nginx
image: nginx:1.15.7 # see https://hub.docker.com/_/nginx/
ports:
- "8000:80"
volumes:
- ~/manon-nginx-logs:/var/log/nginx
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
depends_on:
- manon # to proxy app
16 changes: 16 additions & 0 deletions config/docker/elasticsearch/elasticsearch.yml
@@ -0,0 +1,16 @@
---
## Default Elasticsearch configuration from elasticsearch-docker.
## from https://github.com/elastic/elasticsearch-docker/blob/master/build/elasticsearch/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1

## Use single node discovery in order to disable production mode and avoid bootstrap checks
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
#
discovery.type: single-node
7 changes: 7 additions & 0 deletions config/docker/kibana/kibana.yml
@@ -0,0 +1,7 @@
---
## Default Kibana configuration from kibana-docker.
## from https://github.com/elastic/kibana-docker/blob/master/build/kibana/config/kibana.yml
#
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
6 changes: 6 additions & 0 deletions config/docker/logstash/logstash.yml
@@ -0,0 +1,6 @@
---
## Default Logstash configuration from logstash-docker.
## from https://github.com/elastic/logstash-docker/blob/master/build/logstash/config/logstash-oss.yml
#
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
