Juju Charm - Ceph MON


Overview

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

This charm deploys a Ceph monitor cluster.

Usage

Boot things up by using:

juju deploy -n 3 ceph-mon

By default the ceph-mon cluster will not bootstrap until 3 service units have been deployed and started; this is to ensure that a quorum is achieved prior to adding storage devices.
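To confirm that all three units are up and the monitors have formed a quorum, something like the following helps (a sketch; unit numbers vary, and the ceph call assumes the admin keyring is present on the unit):

juju status ceph-mon
juju run --unit ceph-mon/0 "sudo ceph quorum_status"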

Actions

This charm supports pausing and resuming Ceph's health functions on a cluster, for example when doing maintenance on a machine. To pause or resume, call one of:

juju action do --unit ceph-mon/0 pause-health
juju action do --unit ceph-mon/0 resume-health
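On Juju 2.x the same actions are available through the newer run-action syntax; a small sketch (the unit number is illustrative):

juju run-action ceph-mon/0 pause-health
juju run-action ceph-mon/0 resume-health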

Scale Out Usage

You can use the Ceph OSD and Ceph RADOS Gateway charms (ceph-osd and ceph-radosgw) alongside this charm to add storage capacity and an object gateway, as sketched below.
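A minimal scale-out sketch, assuming all charms are deployed into the same model (unit counts are illustrative, and ceph-osd additionally needs its osd-devices option pointed at real block devices):

juju deploy -n 3 ceph-osd
juju add-relation ceph-osd ceph-mon
juju deploy ceph-radosgw
juju add-relation ceph-radosgw ceph-mon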

Rolling Upgrades

The ceph-mon and ceph-osd charms can initiate a rolling upgrade, triggered by changing the source configuration option. To perform a rolling upgrade, first set source on ceph-mon and watch juju status. Once the monitor cluster has upgraded, set source on ceph-osd and again watch juju status. The monitors and OSDs sort themselves into a known order and upgrade one by one. As each server upgrades, the upgrade code stops all monitor or OSD processes on that server, applies the update and restarts them. The juju status output shows which previous server each unit is waiting on.
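The sequence looks roughly like this (a sketch assuming Juju 2.x syntax; choose a source pocket appropriate to your series and upgrade path):

juju config ceph-mon source=cloud:trusty-mitaka
juju status    # wait until the monitor cluster finishes upgrading
juju config ceph-osd source=cloud:trusty-mitaka
juju status    # the OSDs then upgrade one server at a time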

Supported Upgrade Paths

Currently the following upgrade paths are supported using the Ubuntu Cloud Archive:

  • trusty-firefly -> trusty-hammer
  • trusty-hammer -> trusty-jewel

Firefly is available in Trusty; Hammer is available in Trusty-Juno (end of life), Trusty-Kilo, and Trusty-Liberty; Jewel is available in Trusty-Mitaka.

For example, if the current source setting is cloud:trusty-liberty, changing it to cloud:trusty-mitaka will initiate a rolling upgrade of the monitor cluster from Hammer to Jewel.
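Concretely (a sketch; this assumes the Juju 2.x config syntax):

juju config ceph-mon source                       # prints the current value, e.g. cloud:trusty-liberty
juju config ceph-mon source=cloud:trusty-mitaka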

Edge cases

There's an edge case in the upgrade code: if the previous node never starts upgrading itself, the rolling upgrade can hang forever. If you notice this has happened, it can be fixed by setting the appropriate key in the Ceph monitor cluster. The monitor cluster will have keys that look like ceph-mon_ip-ceph-mon-0_1484680239.573482_start and ceph-mon_ip-ceph-mon-0_1484680274.181742_stop. Each server looks for the stop key of the previous server, which indicates that it upgraded successfully and that it is safe to take itself down. If the stop key is not present, the server waits 10 minutes, then considers the previous server dead and moves on.
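If you do need to intervene, one approach is to inspect the monitors' config-key store and add the missing stop key by hand. This is a hedged sketch: it assumes the upgrade keys are kept in ceph config-key, and the key name and value shown are illustrative only:

juju run --unit ceph-mon/0 "sudo ceph config-key list" | grep _stop
juju run --unit ceph-mon/0 "sudo ceph config-key put ceph-mon_ip-ceph-mon-0_1484680274.181742_stop done"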

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

Network traffic can be bound to specific network spaces using the public (front-side) and cluster (back-side) bindings:

juju deploy ceph-mon --bind "public=data-space cluster=cluster-space"

Alternatively, these can be provided as part of a Juju native bundle configuration:

ceph-mon:
  charm: cs:xenial/ceph-mon
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space

Please refer to the Ceph Network Reference for details on how using these options affects network traffic within a Ceph deployment.

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using the ceph-*-network configuration options will continue to function; if set, these options take precedence over any network space binding provided.

NOTE: The monitor-hosts field is only used to migrate existing clusters to a Juju-managed solution and should be left blank otherwise.
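When adopting an existing, non-charmed cluster, that field points the charm at the current monitors; a sketch with hypothetical addresses (leave the option unset in all other cases):

juju config ceph-mon monitor-hosts="10.0.0.1:6789 10.0.0.2:6789 10.0.0.3:6789"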

Contact Information

Authors

Report bugs on Launchpad

Ceph

Technical Footnotes

This charm uses the new-style Ceph deployment as reverse-engineered from the Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected a different strategy to form the monitor cluster. Since we don't know the names or addresses of the machines in advance, we use the relation-joined hook to wait for all three nodes to come up, and then write their addresses to ceph.conf in the "mon host" parameter. After we initialize the monitor cluster a quorum forms quickly, and OSD bringup proceeds.
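Once the cluster has formed, the result can be inspected on any monitor unit; a small sketch assuming the stock Ceph paths:

juju run --unit ceph-mon/0 "grep 'mon host' /etc/ceph/ceph.conf"
juju run --unit ceph-mon/0 "sudo ceph mon stat"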

See the documentation for more information on Ceph monitor cluster deployment strategies and pitfalls.
