doc/cephfs: edit fs-volumes.rst (1 of x)
Edit doc/cephfs/fs-volumes.rst up to, but not including, the section
"FS Subvolumes".

Signed-off-by: Zac Dover <zac.dover@proton.me>
zdover23 committed May 11, 2024
1 parent 69bd270 commit 0acbb27
Showing 1 changed file with 44 additions and 37 deletions: doc/cephfs/fs-volumes.rst
* FS subvolume groups: a directory level higher than FS
  subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
  across a set of subvolumes

Possible use-cases for the export abstractions:

* FS subvolumes used as Manila shares or CSI volumes

* FS-subvolume groups used as Manila share groups

Requirements
------------

Create a volume by running the following command:

ceph fs volume create <vol_name> [placement]

This creates a CephFS file system and its data and metadata pools. This command
can also deploy MDS daemons for the filesystem using a ceph-mgr orchestrator
module (for example Rook). See :doc:`/mgr/orchestrator`.

``<vol_name>`` is the volume name (an arbitrary string). ``[placement]`` is an
optional string that specifies the :ref:`orchestrator-cli-placement-spec` for
the MDS service.
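
For example, a volume whose MDS daemons are placed on two hosts might be
created as follows (the volume name and host names here are illustrative):

.. prompt:: bash #

   ceph fs volume create vol_a --placement="2 host1 host2"
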
To remove a volume, run the following command:

ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This command removes a file system and its data and metadata pools. It also
tries to remove MDS daemons using the enabled ceph-mgr orchestrator module.
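
For example (the volume name is illustrative):

.. prompt:: bash #

   ceph fs volume rm vol_a --yes-i-really-mean-it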

.. note:: After volume deletion, we recommend restarting `ceph-mgr` if a new
file system is created on the same cluster and the subvolume interface is
being used. See https://tracker.ceph.com/issues/49605#note-5 for more
details.

List volumes by running the following command:

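.. prompt:: bash #

   ceph fs volume ls
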
Rename a volume by running the following command:

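.. prompt:: bash #

   ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]
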
Renaming a volume can be an expensive operation that requires the following:

- Renaming the orchestrator-managed MDS service to match
  ``<new_vol_name>``. This involves launching an MDS service with
  ``<new_vol_name>`` and bringing down the MDS service with ``<vol_name>``.
- Renaming the file system from ``<vol_name>`` to ``<new_vol_name>``.
- Changing the application tags on the data and metadata pools of the file
system to ``<new_vol_name>``.
- Renaming the metadata and data pools of the file system.

The CephX IDs that are authorized for ``<vol_name>`` must be reauthorized for
``<new_vol_name>``. Any ongoing operations of the clients that are using these
IDs may be disrupted. Ensure that mirroring is disabled on the volume.
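
As an illustration, a client that was authorized under the old volume name
might be reauthorized for the new name as follows (the client ID ``manila``
is hypothetical):

.. prompt:: bash #

   ceph fs authorize <new_vol_name> client.manila / rw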

To fetch the information of a CephFS volume, run the following command:

.. prompt:: bash #

ceph fs volume info vol_name [--human_readable]

The ``--human_readable`` flag shows used and available pool capacities in
KB/MB/GB.

The output format is JSON and contains fields as follows:

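As an illustration, the output resembles the following (the pool names,
monitor address, and sizes here are hypothetical)::

    {
        "mon_addrs": [
            "192.168.1.1:3300"
        ],
        "pending_subvolume_deletions": 0,
        "pools": {
            "data": [
                {
                    "avail": 106288709632,
                    "name": "cephfs.vol_a.data",
                    "used": 4096
                }
            ],
            "metadata": [
                {
                    "avail": 106288709632,
                    "name": "cephfs.vol_a.meta",
                    "used": 155648
                }
            ]
        },
        "used_size": 0
    }
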
Create a subvolume group by running the following command:

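.. prompt:: bash #

   ceph fs subvolumegroup create <vol_name> <group_name> [--size <size_in_bytes>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>]
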
The command succeeds even if the subvolume group already exists.

When you create a subvolume group, you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and
size in bytes. The size of the subvolume group is specified by setting
a quota on it (see :doc:`/cephfs/quota`). By default, the subvolume group
is created with octal file mode ``755``, uid ``0``, gid ``0``, and the data
pool layout of its parent directory.
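
For example, a subvolume group with a one-gibibyte quota might be created
like this (the volume name, group name, and size are illustrative):

.. prompt:: bash #

   ceph fs subvolumegroup create vol_a group_a --size 1073741824 --mode 755
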
Remove a subvolume group by running a command of the following form:

ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

The removal of a subvolume group fails if the subvolume group is not empty or
is non-existent. The ``--force`` flag allows the command to succeed when its
argument is a non-existent subvolume group.
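
For example (the names are illustrative; ``--force`` lets the command
succeed even if ``group_a`` does not exist):

.. prompt:: bash #

   ceph fs subvolumegroup rm vol_a group_a --force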


Fetch the absolute path of a subvolume group by running a command of the
following form:

.. prompt:: bash #

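   ceph fs subvolumegroup getpath <vol_name> <group_name>
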
List subvolume groups by running a command of the following form:

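.. prompt:: bash #

   ceph fs subvolumegroup ls <vol_name>
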
.. note:: The subvolume group snapshot feature is no longer supported in
   mainline CephFS (existing group snapshots can still be listed and
   deleted).

Fetch the metadata of a subvolume group by running a command of the following
form:

.. prompt:: bash #

ceph fs subvolumegroup info <vol_name> <group_name>

The output format is JSON and contains fields as follows:

* ``atime``: access time of the subvolume group path in the format ``YYYY-MM-DD
HH:MM:SS``
* ``mtime``: modification time of the subvolume group path in the format
``YYYY-MM-DD HH:MM:SS``
* ``ctime``: change time of the subvolume group path in the format ``YYYY-MM-DD
HH:MM:SS``
* ``uid``: uid of the subvolume group path
* ``gid``: gid of the subvolume group path
* ``mode``: mode of the subvolume group path
* ``created_at``: creation time of the subvolume group in the format
  ``YYYY-MM-DD HH:MM:SS``
* ``data_pool``: data pool to which the subvolume group belongs

Check for the presence of a given subvolume group by running a command of the
following form:

.. prompt:: bash #

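   ceph fs subvolumegroup exist <vol_name>
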
The ``exist`` command outputs:

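* "subvolumegroup exists": if any subvolumegroup is present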
* "no subvolumegroup exists": if no subvolumegroup is present

.. note:: This command checks for the presence of custom groups and not
presence of the default one. A subvolumegroup-existence check alone is not
sufficient to validate the emptiness of the volume. Subvolume existence must
also be checked, as there might be subvolumes in the default group.

Resize a subvolume group by running a command of the following form:

.. prompt:: bash #

ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]

This command resizes the subvolume group quota, using the size specified by
``new_size``. The ``--no_shrink`` flag prevents the subvolume group from
shrinking below the current used size.

The subvolume group may be resized to an infinite size by passing ``inf`` or
``infinite`` as the ``new_size``.
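
For example, to remove the size limit of a subvolume group (the names are
illustrative):

.. prompt:: bash #

   ceph fs subvolumegroup resize vol_a group_a inf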

Remove a snapshot of a subvolume group by running a command of the following
form:

.. prompt:: bash #

ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]

Supplying the ``--force`` flag allows the command to succeed when it would
otherwise fail due to the nonexistence of the snapshot.
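
For example (the names are illustrative):

.. prompt:: bash #

   ceph fs subvolumegroup snapshot rm vol_a group_a snap_a --force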

List snapshots of a subvolume group by running a command of the following form:

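.. prompt:: bash #

   ceph fs subvolumegroup snapshot ls <vol_name> <group_name>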
