Database as a Service (Trove)
General information about the OpenStack Trove component can be found at:
This repository provides the following:
The Bill of Materials document provides a description and representation of Database as a Service that is tuned for OpenPOWER servers. It provides information such as model numbers and feature codes to simplify the ordering process and it provides racking and cabling rules for the preferred layout of servers, switches, and cables.
The Deployment configuration file provides a mapping of servers and switches to software for the purposes of deployment. Each server is mapped to a set of OpenStack based software roles constituting the control plane, compute plane, and storage plane. Each role is defined in terms of operating system based resources such as users and networks that need to be configured to satisfy that role.
The Deployment configuration file must be edited prior to deployment to reflect the actual configuration that is to be installed. This is mostly a matter of making sure that the numbers of servers represented match the number of servers to be installed and that externally visible IP addresses are allocated from a user specified pool, so that the installation is properly integrated into the data center.
The installation process is split into two parts:
- Bare metal installation of Linux
- Installation of OpenStack, Trove, Ceph storage, and Operational Management
The Deployment configuration file is fed into the bare metal installation process, which is performed by cluster-genesis. The operating system is loaded and configured as specified in the configuration file. Users, networks, and switches are configured during this step. The last step invokes a small script from os-services that installs OpenStack and Trove.
More precisely, the bare metal installation step installs only the installation tools for OpenStack and Ceph, not the actual services. The next step is to configure these tools so that they install the actual services in a prescribed manner that fits properly in the data center. The two projects involved are os-services and ceph-services. See the README file of each project to determine what is required here.
The final step is to invoke create-cluster in the os-services repository to install and configure the cluster. os-services orchestrates the installation process of OpenStack, Trove, Ceph, and Operational Management, which are loaded onto the first controller that is set up by cluster-genesis.
The OpenStack dashboard may be reached through your browser:
https://<ipaddr from external-floating-ipaddr in the config.yaml>
This recipe also includes an operational management console which is integrated into the OpenStack dashboard. It monitors the cloud infrastructure and shows metrics related to the capacity, utilization, and health of the cloud infrastructure. It may also be configured to generate alerts when components fail. It is provided through the opsmgr repository.
Only os-services must be configured before invoking create-cluster. For more info, see related projects below.
Passwords may be found in /etc/openstack_deploy/user_secrets*.yml on the first OpenStack controller node.
The toolkit runs on an Ubuntu 16.04 OpenPOWER server or VM that is connected to the internet and to the management switch in the cluster to be configured.
Read the Bill Of Materials
Rack and cable hardware as indicated
Get a local copy of this repository:
$ git clone git://github.com/open-power-ref-design/dbaas
$ cd dbaas
$ TAG=$(git describe --tags $(git rev-list --tags --max-count=1))
$ git checkout $TAG
$ CFG=$(pwd)/config.yml
Edit the configuration file:
Instructions for editing the file are included in the file itself.
Additional information may be found in the Cluster Genesis User Guide
Validate the configuration file:
$ apt-get install python-pip
$ pip install pyyaml
$ git clone git://github.com/open-power-ref-design-toolkit/os-services
$ cd os-services
$ git checkout $TAG
$ ./scripts/validate_config.py --file $CFG
Place the configuration file:
$ git clone git://github.com/open-power-ref-design-toolkit/cluster-genesis
$ cd cluster-genesis
$ git checkout 1.3.0
$ cp $CFG .
Invoke cluster-genesis to perform the bare metal installation process:
Instructions may be found in the Cluster Genesis User Guide identified above.
Wait for cluster-genesis to complete, ~3 hours:
Edit the OpenStack Installer configuration file:
OpenStack installation is performed by OpenStack-Ansible. Instructions for editing the OpenStack user configuration files are described in general terms in os-services. Instructions specifically for the DBaaS service are described below in the Trove Installation section.
Invoke the toolkit again to complete the installation:
Note that this command is invoked on the first controller node; the commands listed above are invoked on the deployer node. When cluster-genesis completes, it displays instructions on the screen for invoking the command above.
The OpenStack Trove component provides the DBaaS feature.
The following files are installed for Trove:
See README.rst <https://github.com/open-power-ref-design-toolkit/os-services/blob/master/README.rst> in os-services for more details.
The following parameters can be customized:
trove_infra_subnet_alloc_start: "172.29.236.100"
trove_infra_subnet_alloc_end: "172.29.236.110"
Trove requires access to the infrastructure network shared by other OpenStack components. The above variables must be set to limit the set of IP addresses that Trove will use from that network. The addresses must belong to the container infrastructure network defined in the inventory file /etc/openstack_deploy/openstack_user_config.yml. The definition of that network is of the form:
cidr_networks:
  container: 172.29.236.0/22
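A quick way to confirm that a chosen Trove range actually falls inside the container network is a check like the following (a sketch using Python's standard ipaddress module, with the default addresses shown above):

```python
import ipaddress

# Container infrastructure network from openstack_user_config.yml
container_net = ipaddress.ip_network("172.29.236.0/22")

# Trove allocation range from the user variables above
start = ipaddress.ip_address("172.29.236.100")
end = ipaddress.ip_address("172.29.236.110")

# Both endpoints must lie inside the network, and start must not exceed end
in_range = start in container_net and end in container_net and start <= end
print(in_range)  # True for the default values
```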
Note that the openstack_user_config.yml file must also contain a used_ips section that covers the same address range.
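Putting these together, the relevant excerpt of openstack_user_config.yml might look like the following (a sketch based on the default Trove range shown above; the comma-separated pair is OpenStack-Ansible's start,end notation for a reserved range):

```yaml
cidr_networks:
  container: 172.29.236.0/22

used_ips:
  # Reserve the addresses handed to Trove so OpenStack-Ansible's own
  # IP allocator does not assign them to containers:
  - "172.29.236.100,172.29.236.110"
```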
The /etc/openstack_deploy/user_secrets*.yml files contain passwords which are generated during the create-cluster phase. Any fields that are manually filled in after the bootstrap-cluster phase will not be touched by the automatic password generator during the create-cluster phase.
Verifying an install
After successful installation, verify that Trove services are running correctly.
Check for the existence of the Trove containers using lxc-ls -f on the controller nodes. There should be three of them:
- *trove-api*
- *trove-conductor*
- *trove-taskmanager*
Attach to the utility container using lxc-attach -n <container name>
Source the environment file:
$ source /root/openrc
Run some sample trove commands and ensure they run without any errors:
$ trove list
$ trove datastore-list
$ trove flavor-list
The next step is to build Trove guest images containing database software and Trove guest agent software, upload them to Glance, and update the Trove datastore list to map the Glance images to the database versions. Further details of this process can be found at: http://docs.openstack.org/developer/trove/#installation-and-deployment
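Once a guest image has been built, the upload and mapping steps look roughly like the following (a sketch only: the image file, datastore name, and version are hypothetical placeholders, and the trove-manage arguments should be verified against the Trove documentation linked above before use):

```shell
# Upload the guest image to Glance (run from the utility container):
$ glance image-create --name trove-mysql-5.6 --disk-format qcow2 \
    --container-format bare --file mysql-guest.qcow2

# Register the datastore, then map the version to the Glance image ID
# printed by the previous command, and set it as the default version:
$ trove-manage datastore_update mysql ''
$ trove-manage datastore_version_update mysql 5.6 mysql <glance-image-id> '' 1
$ trove-manage datastore_update mysql 5.6
```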
Recipes for OpenPOWER servers are located here:
Here, you will find several OpenStack based recipes:
The following projects provide services that are used as major building blocks in recipes: