From c53859ae653bb0c032640c34d54391f2263e31f8 Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Sat, 11 May 2024 14:57:23 +1000
Subject: [PATCH] doc/cephfs: edit fs-volumes.rst (1 of x)

Edit doc/cephfs/fs-volumes up to the section "FS Subvolumes" (but not
including the section "FS Subvolumes").

Signed-off-by: Zac Dover
(cherry picked from commit 0acbb278fc712c32334fd7340b887d2865dbfb86)
---
 doc/cephfs/fs-volumes.rst | 78 +++++++++++++++++++++------------------
 1 file changed, 43 insertions(+), 35 deletions(-)

diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index 3d17be561e55f..1f4c076008e3f 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -20,11 +20,11 @@ abstractions:
   subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
   across a set of subvolumes
 
-Some possible use-cases for the export abstractions:
+Possible use-cases for the export abstractions:
 
 * FS subvolumes used as Manila shares or CSI volumes
 
-* FS subvolume groups used as Manila share groups
+* FS-subvolume groups used as Manila share groups
 
 Requirements
 ------------
@@ -46,9 +46,9 @@ Create a volume by running the following command:
 
    ceph fs volume create <vol_name> [placement]
 
-This creates a CephFS file system and its data and metadata pools. It can also
-deploy MDS daemons for the filesystem using a ceph-mgr orchestrator module (for
-example Rook). See :doc:`/mgr/orchestrator`.
+This creates a CephFS file system and its data and metadata pools. This command
+can also deploy MDS daemons for the filesystem using a ceph-mgr orchestrator
+module (for example Rook). See :doc:`/mgr/orchestrator`.
 
 ``<vol_name>`` is the volume name (an arbitrary string). ``[placement]`` is an
 optional string that specifies the :ref:`orchestrator-cli-placement-spec` for
@@ -62,13 +62,13 @@ To remove a volume, run the following command:
 
    $ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]
 
-This removes a file system and its data and metadata pools. It also tries to
-remove MDS daemons using the enabled ceph-mgr orchestrator module.
+This command removes a file system and its data and metadata pools. It also
+tries to remove MDS daemons using the enabled ceph-mgr orchestrator module.
 
-.. note:: After volume deletion, it is recommended to restart `ceph-mgr`
-   if a new file system is created on the same cluster and subvolume interface
-   is being used. Please see https://tracker.ceph.com/issues/49605#note-5
-   for more details.
+.. note:: After volume deletion, we recommend restarting `ceph-mgr` if a new
+   file system is created on the same cluster and the subvolume interface is
+   being used. See https://tracker.ceph.com/issues/49605#note-5 for more
+   details.
 
 List volumes by running the following command:
 
@@ -80,23 +80,24 @@ Rename a volume by running the following command:
 
 Renaming a volume can be an expensive operation that requires the following:
 
-- Renaming the orchestrator-managed MDS service to match the <new_vol_name>.
-  This involves launching a MDS service with ``<new_vol_name>`` and bringing
-  down the MDS service with ``<vol_name>``.
-- Renaming the file system matching ``<vol_name>`` to ``<new_vol_name>``.
-- Changing the application tags on the data and metadata pools of the file system
-  to ``<new_vol_name>``.
+- Renaming the orchestrator-managed MDS service to match the
+  ``<new_vol_name>``. This involves launching an MDS service with
+  ``<new_vol_name>`` and bringing down the MDS service with ``<vol_name>``.
+- Renaming the file system from ``<vol_name>`` to ``<new_vol_name>``.
+- Changing the application tags on the data and metadata pools of the file
+  system to ``<new_vol_name>``.
 - Renaming the metadata and data pools of the file system.
 
 The CephX IDs that are authorized for ``<vol_name>`` must be reauthorized for
-``<new_vol_name>``. Any ongoing operations of the clients using these IDs may
-be disrupted. Ensure that mirroring is disabled on the volume.
+``<new_vol_name>``. Any ongoing operations of the clients that are using these
+IDs may be disrupted. Ensure that mirroring is disabled on the volume.
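As a quick sketch of the volume lifecycle that the patched section documents (the volume names ``testvol`` and ``newvol`` are illustrative, not part of the patch):

```bash
# Create a volume; the orchestrator may also deploy MDS daemons for it.
ceph fs volume create testvol

# List existing volumes.
ceph fs volume ls

# Rename the volume. Mirroring must be disabled first, and CephX IDs
# authorized for the old name must be reauthorized for the new one.
ceph fs volume rename testvol newvol --yes-i-really-mean-it

# Remove the volume together with its data and metadata pools.
ceph fs volume rm newvol --yes-i-really-mean-it
```

These commands require a running Ceph cluster and appropriate admin credentials, so the output (and the pool names created) will vary by deployment.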
 To fetch the information of a CephFS volume, run the following command:
 
    $ ceph fs volume info vol_name [--human_readable]
 
-The ``--human_readable`` flag shows used and available pool capacities in KB/MB/GB.
+The ``--human_readable`` flag shows used and available pool capacities in
+KB/MB/GB.
 
 The output format is JSON and contains fields as follows:
 
@@ -144,7 +145,7 @@ Create a subvolume group by running the following command:
 
 The command succeeds even if the subvolume group already exists.
 
-When creating a subvolume group you can specify its data pool layout (see
+When you create a subvolume group, you can specify its data pool layout (see
 :doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and
 size in bytes. The size of the subvolume group is specified by setting a quota
 on it (see :doc:`/cephfs/quota`). By default, the subvolume group
@@ -156,13 +157,15 @@ Remove a subvolume group by running a command of the following form:
 
    $ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]
 
 The removal of a subvolume group fails if the subvolume group is not empty or
-is non-existent. The ``--force`` flag allows the non-existent "subvolume group remove
-command" to succeed.
+is non-existent. The ``--force`` flag allows the command to succeed when its
+argument is a non-existent subvolume group.
 
 Fetch the absolute path of a subvolume group by running a command of the
 following form:
 
-   $ ceph fs subvolumegroup getpath <vol_name> <group_name>
+.. prompt:: bash #
+
+   ceph fs subvolumegroup getpath <vol_name> <group_name>
 
 List subvolume groups by running a command of the following form:
 
@@ -171,7 +174,8 @@
 .. note:: Subvolume group snapshot feature is no longer supported in mainline
    CephFS (existing group snapshots can still be listed and deleted)
 
-Fetch the metadata of a subvolume group by running a command of the following form:
+Fetch the metadata of a subvolume group by running a command of the following
+form:
 
 .. prompt:: bash #
@@ -179,9 +183,12 @@ Fetch the metadata of a subvolume group by running a command of the following fo
 
 The output format is JSON and contains fields as follows:
 
-* ``atime``: access time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
-* ``mtime``: modification time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
-* ``ctime``: change time of the subvolume group path in the format "YYYY-MM-DD HH:MM:SS"
+* ``atime``: access time of the subvolume group path in the format ``YYYY-MM-DD
+  HH:MM:SS``
+* ``mtime``: modification time of the subvolume group path in the format
+  ``YYYY-MM-DD HH:MM:SS``
+* ``ctime``: change time of the subvolume group path in the format ``YYYY-MM-DD
+  HH:MM:SS``
 * ``uid``: uid of the subvolume group path
 * ``gid``: gid of the subvolume group path
 * ``mode``: mode of the subvolume group path
@@ -192,8 +199,8 @@ The output format is JSON and contains fields as follows:
 * ``created_at``: creation time of the subvolume group in the format "YYYY-MM-DD HH:MM:SS"
 * ``data_pool``: data pool to which the subvolume group belongs
 
-Check the presence of any subvolume group by running a command of the following
-form:
+Check for the presence of a given subvolume group by running a command of the
+following form:
 
 .. prompt:: bash $
 
@@ -205,9 +212,9 @@ The ``exist`` command outputs:
 
 * "subvolumegroup exists": if any subvolumegroup is present
 * "no subvolumegroup exists": if no subvolumegroup is present
 
 .. note:: This command checks for the presence of custom groups and not
-   presence of the default one. To validate the emptiness of the volume, a
-   subvolumegroup existence check alone is not sufficient. Subvolume existence
-   also needs to be checked as there might be subvolumes in the default group.
+   presence of the default one. A subvolumegroup-existence check alone is not
+   sufficient to validate the emptiness of the volume. Subvolume existence must
+   also be checked, as there might be subvolumes in the default group.
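As a sketch of the inspection commands covered by the hunks above (the names ``testvol`` and ``testgroup`` are illustrative assumptions, not from the patch):

```bash
# Check whether any custom subvolume group exists in volume "testvol".
# Prints "subvolumegroup exists" or "no subvolumegroup exists".
ceph fs subvolumegroup exist testvol

# Fetch a group's metadata (atime, mtime, ctime, uid, gid, mode,
# bytes_quota, data_pool, ...) as JSON.
ceph fs subvolumegroup info testvol testgroup
```

Both commands talk to a live cluster through the ceph-mgr volumes plugin, so they cannot be run outside a deployed Ceph environment.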
 Resize a subvolume group by running a command of the following form:
 
@@ -215,14 +222,15 @@
 
    ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]
 
-The command resizes the subvolume group quota, using the size specified by
+This command resizes the subvolume group quota, using the size specified by
 ``new_size``. The ``--no_shrink`` flag prevents the subvolume group from
 shrinking below the current used size.
 
 The subvolume group may be resized to an infinite size by passing ``inf`` or
 ``infinite`` as the ``new_size``.
 
-Remove a snapshot of a subvolume group by running a command of the following form:
+Remove a snapshot of a subvolume group by running a command of the following
+form:
 
 .. prompt:: bash $
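Taken together, the subvolume-group commands edited in this patch might be exercised as follows (a sketch only; ``testvol`` and ``testgroup`` are hypothetical names, and the 10 GiB figure is an arbitrary example):

```bash
# Create a subvolume group inside volume "testvol".
ceph fs subvolumegroup create testvol testgroup

# Show the group's absolute path within the volume.
ceph fs subvolumegroup getpath testvol testgroup

# Resize the group's quota to 10 GiB (10737418240 bytes);
# --no_shrink refuses to shrink below the currently used size.
ceph fs subvolumegroup resize testvol testgroup 10737418240 --no_shrink

# Remove the group once it is empty.
ceph fs subvolumegroup rm testvol testgroup
```

Passing ``inf`` or ``infinite`` in place of the byte count removes the quota limit, as described above.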