Commit d9f459e — bvis committed Apr 2, 2017 (1 parent: 39e6f06)

Adding a sample stack for logging and allowing the integration with the monitoring system.

Showing 2 changed files with 111 additions and 77 deletions.
README.md: 86 changes (9 additions & 77 deletions)

@@ -4,93 +4,25 @@ A sample image that can be used as a base for collecting Swarm mode metrics in Prometheus

## How to use it

You can use the provided `docker-compose.yml` file as an example. You can deploy the full stack with the command:

```bash
docker stack deploy --compose-file docker-compose.yml monitoring
```
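The `docker-compose.yml` shipped with the repository is the authoritative definition of the stack. As an illustration only, here is a condensed sketch of what such a file might contain; the service names and images are taken from this project, while all other settings (networks, ports, deploy modes) are assumptions:

```yaml
version: "3.1"

networks:
  monitoring:

services:
  cadvisor:
    image: google/cadvisor:v0.24.1
    networks:
      - monitoring
    deploy:
      mode: global          # one container-metrics collector per node

  node-exporter:
    image: basi/node-exporter
    networks:
      - monitoring
    deploy:
      mode: global          # one host-metrics collector per node

  alertmanager:
    image: basi/alertmanager
    networks:
      - monitoring
    ports:
      - "9093:9093"

  prometheus:
    image: basi/prometheus-swarm
    networks:
      - monitoring
    ports:
      - "9090:9090"

  grafana:
    image: basi/grafana
    networks:
      - monitoring
    ports:
      - "3000:3000"
```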

Once everything is running, you just need to connect to Grafana and import the [Docker Swarm & Container Overview](https://grafana.net/dashboards/609) dashboard. By default Grafana is exposed on port 3000 with the credentials admin/admin; be sure to use something different in your deployments.

In case you don't have an Elasticsearch instance with logs and errors, you could provide an invalid configuration, or you could launch the sample ELK stack. I suggest configuring Elasticsearch correctly so you get everything the dashboard offers:

```bash
docker stack deploy --compose-file docker-compose.logging.yml logging
```
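The compose files above rely on `${VAR:-default}` substitution for optional settings, so each variable can be left unset or overridden from the deploying shell. A minimal sketch of how those defaults resolve, using the real `LOGSTASH_VERSION` variable from `docker-compose.logging.yml` with illustrative values:

```shell
# With the variable unset, the default after ':-' is used.
unset LOGSTASH_VERSION
echo "logstash image tag: ${LOGSTASH_VERSION:-v0.8.0}"

# With the variable exported, its value wins over the default.
export LOGSTASH_VERSION=v0.9.0
echo "logstash image tag: ${LOGSTASH_VERSION:-v0.8.0}"
```

Run `docker stack deploy` from the same shell so the exported values are substituted into the compose file.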

Be patient: some services can take a few minutes to start.

This sample stack intentionally uses old versions of Elasticsearch and Kibana to simplify the configuration.

### Docker Engine Metrics

In case you have activated the metrics endpoint in your Docker Swarm cluster, you can also import the [Docker Engine Metrics](https://grafana.net/dashboards/1229) dashboard, which offers complementary data about the Docker daemon itself.
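Activating that endpoint is done in the Docker daemon configuration. A minimal sketch of `/etc/docker/daemon.json`, assuming Docker 1.13+, where the metrics endpoint is still marked experimental:

```json
{
  "experimental": true,
  "metrics-addr": "0.0.0.0:9323"
}
```

After restarting the daemon on each node, the engine exposes Prometheus-format metrics on port 9323.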

docker-compose.logging.yml: 102 changes (102 additions & 0 deletions, new file)

@@ -0,0 +1,102 @@
```yaml
version: "3.1"

networks:
  logging:
  monitoring_monitoring:
    external: true

services:
  logspout:
    image: bekt/logspout-logstash:latest
    networks:
      - logging
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      ROUTE_URIS: logstash://logstash:5000
      DOCKER_LABELS: "true"
    deploy:
      mode: global
      resources:
        limits:
          cpus: '0.25'
          memory: 64M
        reservations:
          cpus: '0.25'
          memory: 32M

  logstash:
    image: basi/logstash:${LOGSTASH_VERSION:-v0.8.0}
    networks:
      - logging
    environment:
      DEBUG: "${LOGSTASH_DEBUG:-false}"
      LOGSPOUT: ignore
      ELASTICSEARCH_USER: ${ELASTICSEARCH_LOGS_USER}
      ELASTICSEARCH_PASSWORD: ${ELASTICSEARCH_LOGS_PASSWORD}
      ELASTICSEARCH_SSL: ${ELASTICSEARCH_LOGS_SSL}
      ELASTICSEARCH_ADDR: ${ELASTICSEARCH_LOGS_ADDR:-elasticsearch}
      ELASTICSEARCH_PORT: ${ELASTICSEARCH_LOGS_PORT:-9200}
    deploy:
      mode: replicated
      replicas: 2
      resources:
        limits:
          cpus: '0.25'
          memory: 800M
        reservations:
          cpus: '0.25'
          memory: 400M

  elasticsearch:
    image: elasticsearch:2
    ports:
      - "9200:9200"
    networks:
      - logging
      - monitoring_monitoring
    environment:
      - LOGSPOUT=ignore
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: '2'
          memory: 640M
        reservations:
          cpus: '0.5'
          memory: 512M

  kibana:
    image: kibana:4
    networks:
      - logging
    ports:
      - "5601:5601"
    environment:
      - LOGSPOUT=ignore
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          cpus: '0.25'
          memory: 384M
        reservations:
          cpus: '0.25'
          memory: 256M
```
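The `ELASTICSEARCH_LOGS_*` variables in the logstash service are read from the deploying shell's environment. A sketch with hypothetical values pointing logstash at an external cluster (substitute your own endpoint and credentials):

```shell
# Hypothetical values for an external Elasticsearch endpoint.
export ELASTICSEARCH_LOGS_USER="logs-writer"
export ELASTICSEARCH_LOGS_PASSWORD="change-me"
export ELASTICSEARCH_LOGS_SSL="true"
export ELASTICSEARCH_LOGS_ADDR="es.example.com"
export ELASTICSEARCH_LOGS_PORT="9243"

echo "logstash target: ${ELASTICSEARCH_LOGS_ADDR}:${ELASTICSEARCH_LOGS_PORT}"
```

Deploy the logging stack from the same shell so the values are substituted into the compose file; if they are left unset, the in-file defaults (`elasticsearch:9200`) are used instead.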
