Merge pull request #20510 from ceph/wip-cdp-docs-update
docs update ceph-deploy reference to reflect ceph-volume API

Reviewed-by: Vasu Kulkarni <vakulkar@redhat.com>
alfredodeza committed Feb 22, 2018
2 parents f696104 + cc79607 commit b657528
Showing 4 changed files with 82 additions and 165 deletions.
97 changes: 24 additions & 73 deletions doc/man/8/ceph-deploy.rst
@@ -15,11 +15,7 @@ Synopsis
| **ceph-deploy** **mon** *create-initial*
| **ceph-deploy** **osd** *prepare* [*ceph-node*]:[*dir-path*]
| **ceph-deploy** **osd** *activate* [*ceph-node*]:[*dir-path*]
| **ceph-deploy** **osd** *create* [*ceph-node*]:[*dir-path*]
| **ceph-deploy** **osd** *create* *--data* *device* *ceph-node*
| **ceph-deploy** **admin** [*admin-node*][*ceph-node*...]
@@ -251,93 +247,48 @@ Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

ceph-deploy disk list [HOST:[DISK]]

Here, [HOST] is hostname of the node and [DISK] is disk name or path.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with Ceph type uuid, creates a
file system, marks the file system as ready for Ceph consumption, uses entire
partition and adds a new partition to the journal disk.

Usage::

ceph-deploy disk prepare [HOST:[DISK]]

Here, [HOST] is hostname of the node and [DISK] is disk name or path.

Subcommand ``activate`` activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts in the correct
location ``/var/lib/ceph/osd/$cluster-$id`` and starts ``ceph-osd``. It is
triggered by ``udev`` when it sees the OSD GPT partition type or on ceph service
start with ``ceph disk activate-all``.

Usage::

ceph-deploy disk activate [HOST:[DISK]]

Here, [HOST] is hostname of the node and [DISK] is disk name or path.

Subcommand ``zap`` zaps/erases/destroys a device's partition table and contents.
It actually uses ``sgdisk`` and its option ``--zap-all`` to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
``sgdisk`` then uses ``--mbrtogpt`` to convert the MBR or BSD disklabel disk to a
GPT disk. The ``prepare`` subcommand can now be executed which will create a new
GPT partition.

Usage::

ceph-deploy disk zap [HOST:[DISK]]
ceph-deploy disk list HOST

Here, [HOST] is hostname of the node and [DISK] is disk name or path.
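
For example, assuming a node named ``osd-server1`` (a placeholder hostname)::

ceph-deploy disk list osd-server1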

Subcommand ``zap`` zaps/erases/destroys a device's partition table and
contents. It calls ``ceph-volume lvm zap`` remotely on the target host,
which can also be used to remove the Ceph metadata from a logical volume.
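
As a rough sketch of what runs on the target node (the device and volume
names below are placeholders), the remote call is equivalent to invoking
``ceph-volume`` directly::

ceph-volume lvm zap /dev/sdb
ceph-volume lvm zap {vg-name}/{lv-name}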

osd
---

Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
first checks against multiple OSDs getting created and warns about the
possibility of more than the recommended which would cause issues with max
allowed PIDs in a system. It then reads the bootstrap-osd key for the cluster or
writes the bootstrap key if not found. It then uses :program:`ceph-disk`
utility's ``prepare`` subcommand to prepare the disk, journal and deploy the OSD
on the desired host. Once prepared, it gives some time to the OSD to settle and
checks for any possible errors and if found, reports to the user.

Subcommand ``create`` prepares a device for a Ceph OSD. It first checks whether
multiple OSDs would be created at once and warns if the count exceeds the
recommended number, which could run into the system's limit on allowed PIDs.
It then reads the bootstrap-osd key for the cluster, or writes the bootstrap
key if it is not found. Next, it uses the :program:`ceph-volume` utility's
``lvm create`` subcommand to prepare the disk (and the journal, if using
filestore) and deploy the OSD on the desired host. Once prepared, it gives the
OSD some time to start and checks for any possible errors, reporting them to
the user if found.

Usage::

ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Bluestore Usage::

Subcommand ``activate`` activates the OSD prepared using ``prepare`` subcommand.
It actually uses :program:`ceph-disk` utility's ``activate`` subcommand with
appropriate init type based on distro to activate the OSD. Once activated, it
gives some time to the OSD to start and checks for any possible errors and if
found, reports to the user. It checks the status of the prepared OSD, checks the
OSD tree and makes sure the OSDs are up and in.
ceph-deploy osd create --data DISK HOST

Usage::
Filestore Usage::

ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
ceph-deploy osd create --data DISK --journal JOURNAL HOST

Subcommand ``create`` uses ``prepare`` and ``activate`` subcommands to create an
OSD.

Usage::

ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
.. note:: For the other available flags, see the man page or the ``--help`` output of ``ceph-deploy osd create``.
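
For instance, with an illustrative hostname and device paths (``osd-server1``,
``/dev/sdb`` and ``/dev/sdc`` below are placeholders, not part of the
reference itself)::

ceph-deploy osd create --data /dev/sdb osd-server1
ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc osd-server1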

Subcommand ``list`` lists disk partitions, Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the ``ceph-disk-list`` output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists,
if the OSD is in an OSD tree, and prints the OSD metadata.
Subcommand ``list`` lists the devices associated with Ceph as part of an OSD.
It relies on the output of ``ceph-volume lvm list``, which maps OSDs to their
devices and reports other useful details about the OSD setup.

Usage::

ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
ceph-deploy osd list HOST
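
The same mapping can also be inspected directly on the OSD node by running
``ceph-volume`` itself (a sketch; execute it on the remote host)::

ceph-volume lvm list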


admin
70 changes: 18 additions & 52 deletions doc/rados/deployment/ceph-deploy-osd.rst
@@ -21,7 +21,7 @@ before building out a large cluster. See `Data Storage`_ for additional details.
List Disks
==========

To list the disks on a node, execute the following command::

ceph-deploy disk list {node-name [node-name]...}

@@ -38,80 +38,46 @@ execute the following::
.. important:: This will delete all data.


Prepare OSDs
============
Create OSDs
===========

Once you create a cluster, install Ceph packages, and gather keys, you
may prepare the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
may create the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::

ceph-deploy osd prepare {node-name}:{data-disk}[:{journal-disk}]
ceph-deploy osd prepare osdserver1:sdb:/dev/ssd
ceph-deploy osd prepare osdserver1:sdc:/dev/ssd
ceph-deploy osd create --data {data-disk} {node-name}

The ``prepare`` command only prepares the OSD. On most operating
systems, the ``activate`` phase will automatically run when the
partitions are created on the disk (using Ceph ``udev`` rules). If not
use the ``activate`` command. See `Activate OSDs`_ for
details.
For example::

The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and
a path to an SSD journal partition. We recommend storing the journal on
a separate drive to maximize throughput. You may dedicate a single drive
for the journal too (which may be expensive) or place the journal on the
same disk as the OSD (not recommended as it impairs performance). In the
foregoing example we store the journal on a partitioned solid state drive.
ceph-deploy osd create --data /dev/ssd osd-server1

You can use the settings --fs-type or --bluestore to choose which file system
you want to install in the OSD drive. (More information by running
'ceph-deploy osd prepare --help').

For bluestore (the default) the example assumes a disk dedicated to one Ceph
OSD daemon. Filestore is also supported; in that case the ``--journal`` flag
must be passed in addition to ``--filestore`` to define the journal device on
the remote host.
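
A filestore sketch, again with placeholder device names and hostname::

ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/ssd osd-server1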

.. note:: When running multiple Ceph OSD daemons on a single node, and
sharing a partitioned journal with each OSD daemon, you should consider
the entire node the minimum failure domain for CRUSH purposes, because
if the SSD drive fails, all of the Ceph OSD daemons that journal to it
will fail too.


Activate OSDs
=============

Once you prepare an OSD you may activate it with the following command. ::

ceph-deploy osd activate {node-name}:{data-disk-partition}[:{journal-disk-partition}]
ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1
ceph-deploy osd activate osdserver1:/dev/sdc1:/dev/ssd2

The ``activate`` command will cause your OSD to come ``up`` and be placed
``in`` the cluster. The ``activate`` command uses the path to the partition
created when running the ``prepare`` command.


Create OSDs
===========

You may prepare OSDs, deploy them to the OSD node(s) and activate them in one
step with the ``create`` command. The ``create`` command is a convenience method
for executing the ``prepare`` and ``activate`` command sequentially. ::

ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
ceph-deploy osd create osdserver1:sdb:/dev/ssd1

.. List OSDs
.. =========
List OSDs
=========

.. To list the OSDs deployed on a node(s), execute the following command::
To list the OSDs deployed on a node(s), execute the following command::

.. ceph-deploy osd list {node-name}
ceph-deploy osd list {node-name}


Destroy OSDs
============

.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.

.. To destroy an OSD, execute the following command::
.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]
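
In the meantime, a minimal manual sketch (assuming an example OSD id of ``1``;
see `Remove OSDs`_ for the complete procedure) would look like::

ceph osd out osd.1
systemctl stop ceph-osd@1        # run on the OSD node
ceph osd purge osd.1 --yes-i-really-mean-it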
