diff --git a/source/ceph_storage.rst b/source/ceph_storage.rst
index 4ba317b..e45e914 100644
--- a/source/ceph_storage.rst
+++ b/source/ceph_storage.rst
@@ -16,11 +16,6 @@ Ceph Storage
 
 The Ceph deployment is not managed by StackHPC Ltd.
 
-Troubleshooting
-===============
-
-.. include:: include/ceph_troubleshooting.rst
-
 Working with Ceph deployment tool
 =================================
 
@@ -31,3 +26,13 @@ Working with Ceph deployment tool
 .. ifconfig:: deployment['cephadm']
 
    .. include:: include/cephadm.rst
+
+Operations
+==========
+
+.. include:: include/ceph_operations.rst
+
+Troubleshooting
+===============
+
+.. include:: include/ceph_troubleshooting.rst
diff --git a/source/include/ceph_operations.rst b/source/include/ceph_operations.rst
new file mode 100644
index 0000000..74bc542
--- /dev/null
+++ b/source/include/ceph_operations.rst
@@ -0,0 +1,26 @@
+
+
+Replacing a drive
+-----------------
+
+See upstream documentation:
+https://docs.ceph.com/en/quincy/cephadm/services/osd/#replacing-an-osd
+
+If a disk holding the DB and/or WAL fails, it is necessary to recreate
+(using the replacement procedure above) all OSDs associated with that
+disk - usually an NVMe drive. The following command identifies which
+OSDs are tied to which physical disks:
+
+.. code-block:: console
+
+   ceph# ceph device ls
+
+Host maintenance
+----------------
+
+https://docs.ceph.com/en/quincy/cephadm/host-management/#maintenance-mode
+
+Upgrading
+---------
+
+https://docs.ceph.com/en/quincy/cephadm/upgrade/
diff --git a/source/include/cephadm.rst b/source/include/cephadm.rst
index 7bd0c4a..5130a6b 100644
--- a/source/include/cephadm.rst
+++ b/source/include/cephadm.rst
@@ -34,14 +34,27 @@ cephadm based playbooks utilising stackhpc.cephadm Ansible Galaxy collection.
 Running Ceph commands
 =====================
 
-Ceph commands can be run via ``cephadm shell`` utility container:
+Ceph commands are usually run inside a ``cephadm shell`` utility container:
 
 .. code-block:: console
 
    ceph# cephadm shell
 
-This command will be only successful on ``mons`` group members (the admin key
-is copied only to those nodes).
+Operating a cluster requires a keyring with admin access to be available for
+Ceph commands. Cephadm copies such a keyring to the nodes carrying the
+`_admin <https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels>`__
+label, which is present on the MON servers by default when using the
+`StackHPC Cephadm collection <https://github.com/stackhpc/ansible-collection-cephadm>`__.
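+
+For example, to allow Ceph commands to be run from an additional host, the
+``_admin`` label can be applied to that host (``storage-01`` below is an
+example hostname) so that cephadm copies the admin keyring there:
+
+.. code-block:: console
+
+   # "storage-01" is an example hostname - replace it with the host that
+   # should receive the admin keyring
+   ceph# ceph orch host label add storage-01 _admin
 
 Adding a new storage node
 =========================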