Support deploying Monasca monitoring service on Swarm cluster #6

Merged
merged 1 commit on May 25, 2018
+200 −1

@@ -0,0 +1,10 @@
#
# Copyright StackHPC, 2018
#
---
- name: Deploy Monasca Swarm monitoring service
  hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - role: monasca_swarm_service
@@ -15,7 +15,7 @@ alaska_cloud: alaska
alaska_homedir: /alaska
alaska_softiron: 10.4.99.101
# OpenStack fully qualified project name
# OpenStack fully qualified project name (used for Grafana with domain support)
project_name: p3@default
# Virtual IP address of the controller node
@@ -28,6 +28,16 @@ alaska_monitoring_server: 10.60.253.3
monasca_agent_p3_username: p3-monasca-agent
monasca_agent_p3_password: "{{ vault_monasca_agent_password }}"
# Monasca Swarm service config
monasca_swarm_service_forwarder_port: 17120
monasca_swarm_service_log_level: INFO
monasca_swarm_service_api_uri: http://{{ controller_vip }}:8082/v2.0
monasca_swarm_service_log_api_uri: http://{{ controller_vip }}:5607
monasca_swarm_service_keystone_uri: http://{{ controller_vip }}:5000/v3
monasca_swarm_service_username: "{{ monasca_agent_p3_username }}"
monasca_swarm_service_password: "{{ monasca_agent_p3_password }}"
monasca_swarm_service_project_name: p3
# Local Grafana admin account for configuring Grafana
grafana_admin_username: grafana-admin
grafana_admin_password: "{{ vault_grafana_admin_password }}"
@@ -0,0 +1,15 @@
Monasca Swarm monitoring service
================================
This role assumes the following environment variables have been set
in order to interact with the target Swarm Docker API.
* `DOCKER_HOST`
* `DOCKER_CERT_PATH`
* `DOCKER_TLS_VERIFY`
A script to set these can be generated from the OpenStack CLI:
`mkdir -p ~/swarm-creds && $(openstack coe cluster config <cluster name> --dir ~/swarm-creds --force | tee ~/swarm-creds/env.sh)`
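As a quick sanity check (a suggested step, not performed by the role), the generated `env.sh` can be sourced and the Swarm manager queried before running the playbook:

`source ~/swarm-creds/env.sh && docker node ls`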

brtknr (Member) commented on May 15, 2018:

Perhaps we need to mention the versions of Docker we have tested this with, i.e. that the client and server versions reported by `docker version` need to be 18.03 or higher.

dougszumski (Member) commented on May 15, 2018:

I think the use of a recent Docker version should be enforced by the version in the Docker Compose file (v3.6 requires Docker 18.02 onwards). Do we know if 18.02 doesn't work?

brtknr (Member) commented on May 15, 2018:

According to these docs, compose file format v3.6 is supported from Docker 18.02 onwards.

dougszumski (Member) commented on May 15, 2018:

Yeah - the compose file included in this PR is v3.6, so I believe the Docker client will complain if it's < 18.02. Did 18.02 not work? If it didn't, I can add a check.

brtknr (Member) commented on May 15, 2018:

I never tried 18.02. yum installed the latest client at the time, which was 18.04, and 18.03 when I was tinkering with the Fedora Atomic image to get the latest Docker version, but I imagine the compose file format version should guarantee a compatible Docker install. My point was more about just adding a note saying that 18.02+ is required on both client and server to support compose file format v3.6, or just pointing to the Docker docs pasted above.

dougszumski (Member) commented on May 15, 2018:

Added a note.

The role requires Docker Engine 18.02.0+. This includes the client running on the localhost.
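For example, the client and server versions can be confirmed on the deployment host before running the role (a suggested check, not enforced by the role itself):

`docker version`

Both the Client and Server (Engine) versions reported should be 18.02.0 or later.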
@@ -0,0 +1,37 @@
---
- name: Create temporary folder for config files
  tempfile:
    state: directory
    suffix: config
  register: tmp_folder

- name: Set temporary folder permissions
  file:
    path: "{{ tmp_folder.path }}"
    mode: 0700

- name: Generate Fluentd config
  template:
    src: fluentd.conf.j2
    dest: "{{ tmp_folder.path }}/fluentd.conf"
    mode: 0600

- name: Generate Monasca Swarm service Docker compose file
  template:
    src: monasca_swarm_service.yml.j2
    dest: "{{ tmp_folder.path }}/monasca_swarm_service.yml"
    mode: 0600

- name: Deploy Monasca Swarm service
  # At the time of writing the docker_compose module doesn't support compose v3
  # and we want compose v3 for the global deploy mode.
  command: "docker stack deploy --compose-file {{ tmp_folder.path }}/monasca_swarm_service.yml monasca-monitoring-stack"
  environment:
    DOCKER_HOST: "{{ lookup('env','DOCKER_HOST') }}"
    DOCKER_CERT_PATH: "{{ lookup('env','DOCKER_CERT_PATH') }}"
    DOCKER_TLS_VERIFY: "{{ lookup('env','DOCKER_TLS_VERIFY') }}"

- name: Remove temporary config folder
  file:
    state: absent
    path: "{{ tmp_folder.path }}"
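A minimal way to verify the deployment afterwards, assuming the same DOCKER_* environment variables are still set in the shell (this check is not part of the role), is to list the services in the stack created by the deploy task:

    docker stack services monasca-monitoring-stack

Each service should show mode global, with one task per Swarm node.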
@@ -0,0 +1,44 @@
# Accept logs from Docker Fluentd log driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Add a timestamp dimension to all logs to record the event time. The
# event time is the time extracted from the log message in all cases
# where the time_key is set, and the time the record entered fluentd
# if no time_key is set.
<filter *.**>
  @type record_transformer
  <record>
    timestamp ${time}
  </record>
</filter>

# Docker saves all logs under the 'log' field. The fluentd-monasca
# plugin assumes that they are saved under the 'message' field. Here
# we map the 'log' field to the 'message' field for all logs.
<filter *.**>
  @type record_transformer
  enable_ruby true
  <record>
    message ${record["log"]}
  </record>
  remove_keys log
</filter>

<match *.**>
  type copy
  <store>
    @type monasca
    keystone_url {{ monasca_swarm_service_keystone_uri }}
    monasca_log_api {{ monasca_swarm_service_log_api_uri }}
    monasca_log_api_version v3.0
    username {{ monasca_swarm_service_username }}
    password {{ monasca_swarm_service_password }}
    domain_id default
    project_name {{ monasca_swarm_service_project_name }}
  </store>
</match>
@@ -0,0 +1,83 @@
---
version: "3.6"

networks:
  hostnet:
    external: true
    name: host

configs:
  fluentd:
    file: {{ tmp_folder.path }}/fluentd.conf

services:
  fluentd:
    image: stackhpc/monasca-fluentd
    networks:
      hostnet: {}
    deploy:
      mode: global
    configs:
      - source: fluentd
        target: "/fluentd/etc/fluent.conf"
        mode: 0644

  monasca-forwarder:
    image: stackhpc/agent-forwarder
    hostname: "{% raw %}{{.Node.Hostname}}{% endraw %}"
    networks:
      hostnet: {}
    deploy:
      mode: global
    environment:
      - "LOG_LEVEL={{ monasca_swarm_service_log_level }}"
      - "OS_AUTH_URL={{ monasca_swarm_service_keystone_uri }}"
      - "OS_USERNAME={{ monasca_swarm_service_username }}"
      - "OS_PASSWORD={{ monasca_swarm_service_password }}"
      - OS_USER_DOMAIN_NAME=Default
      - "OS_PROJECT_NAME={{ monasca_swarm_service_project_name }}"
      - OS_PROJECT_DOMAIN_NAME=Default
      - "MONASCA_URL={{ monasca_swarm_service_api_uri }}"
      - SERVICE_TYPE=monitoring
      - ENDPOINT_TYPE=public
      - REGION_NAME=RegionOne
      - "FORWARDER_URL=http://127.0.0.1:{{ monasca_swarm_service_forwarder_port }}"
      - "FORWARDER_PORT={{ monasca_swarm_service_forwarder_port }}"

  monasca-collector:
    image: stackhpc/agent-collector
    hostname: "{% raw %}{{.Node.Hostname}}{% endraw %}"
    networks:
      hostnet: {}
    deploy:
      mode: global
    environment:
      - HOST=true
      - DOCKER=true
      - DOCKER_ROOT=/rootfs
      - DOCKER_SOCKET=unix://var/run/docker.sock
      - "LOG_LEVEL={{ monasca_swarm_service_log_level }}"
      - "FORWARDER_URL=http://127.0.0.1:{{ monasca_swarm_service_forwarder_port }}"
      - "FORWARDER_PORT={{ monasca_swarm_service_forwarder_port }}"
    volumes:
      - "/:/rootfs"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"
      - "/dev/disk/:/dev/disk:ro"

markgoddard (Member) commented on May 15, 2018:

I've occasionally seen weird issues with stopping and removing containers when mounting in special places like this. Just one to check.

dougszumski (Member) commented on May 16, 2018:

Updated so that DOCKER_ROOT points to /rootfs mounted in from the host. Good spot @markgoddard.

    depends_on:
      - monasca-forwarder

  monasca-statsd:
    image: stackhpc/agent-statsd
    networks:
      hostnet: {}
    deploy:
      mode: global
    environment:
      - "LOG_LEVEL={{ monasca_swarm_service_log_level }}"
      - "FORWARDER_URL=http://127.0.0.1:{{ monasca_swarm_service_forwarder_port }}"
      - "FORWARDER_PORT={{ monasca_swarm_service_forwarder_port }}"
    depends_on:
      - monasca-forwarder
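Since every service in this stack uses the global deploy mode, a task for each should be scheduled on every node in the Swarm. A quick way to check placement for one of them (a suggested check; docker stack deploy prefixes service names with the stack name used in the deploy task) is:

    docker service ps monasca-monitoring-stack_monasca-collector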