Merge pull request #17425 from ceph/wip-ceph-volume-docs
docs: ceph-volume CLI updates

Reviewed-by: Andrew Schoen <aschoen@redhat.com>
alfredodeza committed Sep 1, 2017
2 parents 0dbd0b9 + dc2f1ff commit 3831d40
Showing 3 changed files with 74 additions and 10 deletions.
6 changes: 5 additions & 1 deletion doc/ceph-volume/intro.rst
@@ -16,4 +16,8 @@ them.
-------------------
By making use of :term:`LVM tags`, the :ref:`ceph-volume-lvm` sub-command is
able to store and later re-discover and query devices associated with OSDs so
that they can later be activated. This includes support for LVM-based
technologies like dm-cache as well.

For ``ceph-volume``, the use of dm-cache is transparent: there is no difference
for the tool, which treats a dm-cache device like a plain logical volume.
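
Since dm-cache volumes behave here like any other logical volume, one can be
assembled with standard LVM commands before handing it to ``ceph-volume``.
A minimal sketch, assuming ``/dev/sdb`` is a slow data device and
``/dev/nvme0n1`` a fast caching device (all names and sizes are illustrative)::

    $ pvcreate /dev/sdb /dev/nvme0n1
    $ vgcreate vg0 /dev/sdb /dev/nvme0n1
    # origin volume that will hold the OSD data, placed on the slow device
    $ lvcreate -L 100G -n osd_data vg0 /dev/sdb
    # cache pool carved out of the fast device
    $ lvcreate --type cache-pool -L 10G -n osd_cache vg0 /dev/nvme0n1
    # attach the cache pool; vg0/osd_data is now backed by dm-cache
    $ lvconvert --type cache --cachepool vg0/osd_cache vg0/osd_data
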
12 changes: 8 additions & 4 deletions doc/ceph-volume/lvm/activate.rst
@@ -31,17 +31,17 @@ the same id exists and would end up activating the incorrect one.

Discovery
---------
With OSDs previously created by ``ceph-volume``, a *discovery* process is
performed using :term:`LVM tags` to enable the systemd units.
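
These tags can be inspected directly with LVM's own tooling. A minimal sketch
(the ``ceph.*`` tag names and values shown here are illustrative)::

    $ sudo lvs -o lv_name,lv_tags
      LV       LV Tags
      osd_data ceph.osd_id=0,ceph.osd_fsid=8715BEB4-15C5-49DE-BA6F-401086EC7B41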

The systemd unit will capture the :term:`OSD id` and :term:`OSD uuid` and
persist them. Internally, the activation will enable the unit like::

    systemctl enable ceph-volume@lvm-$id-$uuid

For example::

    systemctl enable ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41

Would start the discovery process for the OSD with an id of ``0`` and a UUID of
``8715BEB4-15C5-49DE-BA6F-401086EC7B41``.
@@ -54,7 +54,11 @@ The systemd unit will look for the matching OSD device, and by looking at its
#. mount the device in the corresponding location (by convention this is
   ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)

#. ensure that all required devices are ready for that OSD. In the case of
   a journal (when ``--filestore`` is selected) the device will be queried (with
   ``blkid`` for partitions, and lvm for logical volumes) to ensure that the
   correct device is being linked; see the sketch after this list. The symbolic
   link will *always* be re-done to ensure that the correct device is linked.

#. start the ``ceph-osd@0`` systemd unit
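
As a sketch of the ``blkid`` verification mentioned above (the ``PARTUUID``
value is illustrative), a journal partition can be resolved back to its
current device name regardless of how the device was enumerated::

    $ blkid -t PARTUUID="16399d72-1e1f-467d-96ee-6fe371a7d0d4" -o device
    /dev/sdc1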

66 changes: 61 additions & 5 deletions doc/ceph-volume/lvm/prepare.rst
@@ -41,10 +41,25 @@ following the minimum size requirements for data and journal.

The API call looks like::

    ceph-volume prepare --filestore --data volume_group/lv_name --journal journal

The ``--data`` value *must* be a volume group name and a logical volume name
separated by a ``/``. Since logical volume names are not enforced for
uniqueness, this prevents using the wrong volume. The ``--journal`` can be
either a logical volume *or* a partition.

When using a partition, it *must* contain a ``PARTUUID`` discoverable by
``blkid``, so that it can later be identified correctly regardless of the
device name (or path).

When using a partition, this is how it would look for ``/dev/sdc1``::

    ceph-volume prepare --filestore --data volume_group/lv_name --journal /dev/sdc1

For a logical volume, just like for ``--data``, a volume group and logical
volume name are required::

    ceph-volume prepare --filestore --data volume_group/lv_name --journal volume_group/journal_lv
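
If the volume group and logical volumes do not exist yet, they can be created
beforehand with plain LVM tooling. A minimal sketch, assuming ``/dev/sdb`` is
the backing device (names and sizes are illustrative)::

    $ pvcreate /dev/sdb
    $ vgcreate volume_group /dev/sdb
    # data volume for the OSD
    $ lvcreate -L 100G -n lv_name volume_group
    # journal volume, sized according to the minimum requirements noted above
    $ lvcreate -L 5G -n journal_lv volume_group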

A generated uuid is used to ask the cluster for a new OSD. These two pieces are
crucial for identifying an OSD and will later be used throughout the
@@ -74,6 +89,46 @@ mounted, re-using all the pieces of information from the initial steps::
    --osd-uuid <osd uuid> --keyring /var/lib/ceph/osd/<cluster name>-<osd id>/keyring \
    --setuser ceph --setgroup ceph


.. _ceph-volume-lvm-partitions:

Partitioning
------------
``ceph-volume lvm`` does not currently create partitions from a whole device.
If using device partitions, the only requirement is that they contain
a ``PARTUUID`` that is discoverable by ``blkid``. Both ``fdisk`` and
``parted`` will create one automatically for a new partition.

For example, using a new, unformatted drive (``/dev/sdd`` in this case) we can
use ``parted`` to create a new partition. First we list the device
information::

    $ parted --script /dev/sdd print
    Model: VBOX HARDDISK (scsi)
    Disk /dev/sdd: 11.5GB
    Sector size (logical/physical): 512B/512B
    Disk Flags:

This device is not even labeled yet, so we can use ``parted`` to create
a ``gpt`` label before we create a partition, and verify again with ``parted
print``::

    $ parted --script /dev/sdd mklabel gpt
    $ parted --script /dev/sdd print
    Model: VBOX HARDDISK (scsi)
    Disk /dev/sdd: 11.5GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:

Now let's create a single partition, and then verify that ``blkid`` can find
the ``PARTUUID`` that is needed by ``ceph-volume``::

    $ parted --script /dev/sdd mkpart primary 1 100%
    $ blkid /dev/sdd1
    /dev/sdd1: PARTLABEL="primary" PARTUUID="16399d72-1e1f-467d-96ee-6fe371a7d0d4"
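
A single tag can also be extracted on its own, which is useful for scripting.
A sketch using the partition just created::

    $ blkid -s PARTUUID -o value /dev/sdd1
    16399d72-1e1f-467d-96ee-6fe371a7d0d4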


.. _ceph-volume-lvm-existing-osds:

Existing OSDs
@@ -92,8 +147,9 @@ already running there are a few things to take into account:
be removed (like fstab mount points)
* There is currently no support for encrypted volumes

The one-time process for an existing OSD, with an ID of 0 and using
a ``"ceph"`` cluster name, would look like this (the following command will
**destroy any data** in the OSD)::

    ceph-volume lvm prepare --filestore --osd-id 0 --osd-fsid E3D291C1-E7BF-4984-9794-B60D9FA139CB
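
If the fsid of an existing OSD is not known, it can be read from the OSD's
data directory, following the mount convention mentioned earlier. A sketch::

    $ cat /var/lib/ceph/osd/ceph-0/fsid
    E3D291C1-E7BF-4984-9794-B60D9FA139CB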
