OpenStack workload monitoring tool

Description

This solution allows you to prepare a test workload within an environment and monitor instance availability during scheduled operations such as Ceph or Contrail upgrades.

By default, the Ansible playbook creates one basic monitor VM that runs Prometheus, Alertmanager, and Alerta as Docker Compose services on top of it. In addition, it generates a specified number of dummy VMs with a Prometheus exporter that are used as monitoring targets to check instance availability.
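
Once the playbook has run, a rough sanity check on the monitor VM might look like the following (a sketch only: the Compose service names and published ports are assumptions and may differ in the actual compose file):

# on the monitor (master) VM
docker-compose ps                         # prometheus, alertmanager and alerta should be Up
curl -s http://localhost:9090/-/healthy   # Prometheus health endpoint, assuming the default port 9090 is published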

Architecture diagram

Deployment Guide

Clone the git repo to the local machine

git clone https://github.com/dpovolotskiy/openstack-workload-monitoring/

Create a virtualenv, activate it, and install the requirements

virtualenv -p /usr/bin/python3 venv
source venv/bin/activate
pip install -r requirements.txt

Build master/minion images

Run the diskimage-builder/build-master-image.sh and diskimage-builder/build-slave-image.sh scripts, as shown below.
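
For example, from the project root (a sketch; the build scripts may have their own prerequisites, such as diskimage-builder being installed via the requirements above):

bash diskimage-builder/build-master-image.sh
bash diskimage-builder/build-slave-image.sh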

Create images in OpenStack

Copy the images from the diskimage-builder/images/ directory on the local machine to one of the OpenStack controllers and create the images in OpenStack:

openstack image create --disk-format qcow2 --container-format bare --public --file PATH_TO_IMAGE IMAGE_NAME
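
For example, assuming the build scripts produced master.qcow2 and slave.qcow2 (the file and image names here are illustrative assumptions):

openstack image create --disk-format qcow2 --container-format bare --public --file master.qcow2 prometheus-master-image
openstack image create --disk-format qcow2 --container-format bare --public --file slave.qcow2 prometheus-slave-image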

Create "mons" project and add admin to "mons" project

openstack project create mons
openstack role add --user admin --project mons admin
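
You can verify the assignment, for example:

openstack role assignment list --user admin --project mons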

Start a Docker container with Ubuntu 18.04 on the bmk (for MCP) node

docker run -it -d --network host --name mons ubuntu:18.04

Enter the Docker container and install the required packages

docker exec -ti mons /bin/bash
apt update
apt install git vim python-pip

Clone the git repo into the Docker container and install the requirements

git clone https://github.com/dpovolotskiy/openstack-workload-monitoring/
pip install -r requirements.txt

Fill in clouds.yaml and roles/create_os_resources/vars/main.yaml

Prepare roles/create_os_resources/vars/main.yaml

You need to fill in the vars file. Content example:

#Set master/slave image ids
slave_image_id: 75b4cd0a-9711-4257-aaab-b4e3668a6259
master_image_id: 2840ac50-49cc-4de1-b18e-6d507e3704da
internal_network_cidr: 10.0.0.0/24
#Set public network id
public_net_id: d662521e-a3d9-420d-9dc5-8a784b2efe22
#Set public network name
floating_ip_pools: net_ext
#Set AZ
availability_zone: nova
number_of_random_instances: 2
assign_floating_ip_daemon_set_enabled: true
assign_floating_ip_random_set_enabled: false
check_internet_access_daemon_set_enabled: true
check_internet_access_random_set_enabled: true
static_routes_to_snat_vms_set_enabled: false
add_google_dns_enabled: true
batch_size: 2
master_instance_name: "prometheus-master"
slave_instance_name: "prometheus-minion"
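
The IDs above can be looked up with the OpenStack CLI, for example (the image and network names are the illustrative ones used earlier and may differ in your environment):

openstack image show -f value -c id prometheus-master-image   # master_image_id
openstack image show -f value -c id prometheus-slave-image    # slave_image_id
openstack network show -f value -c id net_ext                 # public_net_id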

Prepare clouds.yaml

You need to fill in the clouds.yaml file and put it in the root project folder. Content example:

clouds:
  devstack:
    auth:
      auth_url: http://192.168.122.10:35357/
      project_name: demo
      username: demo
      password: 0penstack
    region_name: RegionOne
  ds-admin:
    auth:
      auth_url: http://192.168.122.10:35357/
      project_name: admin
      username: admin
      password: 0penstack
    region_name: RegionOne
  infra:
    cloud: rackspace
    auth:
      project_id: 275610
      username: openstack
      password: xyzpdq!lazydog
    region_name: DFW,ORD,IAD
    interface: internal

You can find more information about clouds.yaml in the official documentation.
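
Which clouds.yaml entry is used depends on how the playbook and roles reference the cloud; as a rough example, openstacksdk-based tooling (including the openstack CLI) typically selects an entry via the OS_CLOUD environment variable, which you can also use to check that the credentials work:

export OS_CLOUD=ds-admin    # "ds-admin" is the example entry above; use your own entry name
openstack server list       # quick credentials check against the selected cloud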

Start the Ansible playbook

From the root project folder, execute:

ansible-playbook -i hosts main.yaml