Juju Charm - Nova Cloud Controller

nova-cloud-controller

Cloud controller node for OpenStack Nova. Contains nova-scheduler, nova-api, nova-network and nova-objectstore.
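
The charm is normally related to several other OpenStack charms. A minimal deployment sketch is shown below (assuming applications named mysql, rabbitmq-server, keystone and nova-compute are already deployed; the relation endpoints follow the standard OpenStack charm interfaces):

juju deploy nova-cloud-controller
juju add-relation nova-cloud-controller mysql            # database (shared-db)
juju add-relation nova-cloud-controller rabbitmq-server  # messaging (amqp)
juju add-relation nova-cloud-controller keystone         # identity-service
juju add-relation nova-cloud-controller nova-compute     # cloud-compute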

If console access is required then console-proxy-ip should be set to a client-accessible IP that resolves to the nova-cloud-controller. If running in HA mode then the public VIP is used if console-proxy-ip is set to local. Note: the console access protocol is baked into a guest when it is created; if you change it, console access for existing guests will stop working.
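
For example, console access over noVNC through a specific proxy address could be configured as follows (a sketch; the IP address is illustrative, and console-access-protocol and console-proxy-ip are assumed charm config options):

juju config nova-cloud-controller console-access-protocol=novnc console-proxy-ip=10.0.0.10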

HA/Clustering

There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases, a relationship to hacluster is required which provides the corosync back end HA functionality.

To use virtual IP(s) the clustered nodes must be on the same subnet such that the VIP is a valid IP on the subnet for one of the node's interfaces and each node has an interface in said subnet. The VIP becomes a highly-available API endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP HA. If multiple networks are being used, a VIP should be provided for each network, separated by spaces. Optionally, vip_iface or vip_cidr may be specified.
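
For example, a single-network VIP setup might look like the following (a sketch; the address and the hacluster application name ncc-hacluster are illustrative):

juju config nova-cloud-controller vip=10.0.1.100
juju deploy hacluster ncc-hacluster
juju add-relation nova-cloud-controller ncc-hacluster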

DNS high availability does not require the clustered nodes to be on the same subnet, but it has several prerequisites: the feature is currently only available for MAAS 2.0 or greater environments (MAAS 2.0 requires Juju 2.0 or greater), the clustered nodes must have static or "reserved" IP addresses registered in MAAS, and the DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one of 'os-admin-hostname', 'os-internal-hostname' or 'os-public-hostname' must be set in order to use DNS HA. One or more of the above hostnames may be set.
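
For example (a sketch; the hostname is illustrative and must already be registered in MAAS, and a relation to hacluster is still required as described above):

juju config nova-cloud-controller dns-ha=true os-public-hostname=ncc.example.com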

The charm will throw an exception in the following circumstances:

- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s) are set

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.

To use this feature, use the --bind option when deploying the charm:

juju deploy nova-cloud-controller --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"

Alternatively, these bindings can be provided as part of a Juju native bundle configuration:

nova-cloud-controller:
  charm: cs:xenial/nova-cloud-controller
  num_units: 1
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space
    shared-db: internal-space

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using os-*-network configuration options will continue to function; if set, these options take precedence over any network space binding provided.