reef: doc/cephfs: edit fs-volumes.rst (1 of x) #57418

Merged
78 changes: 43 additions & 35 deletions doc/cephfs/fs-volumes.rst
* FS subvolume groups, an abstraction for a directory level higher than FS
  subvolumes. Used to effect policies (e.g., :doc:`/cephfs/file-layouts`)
  across a set of subvolumes

Possible use-cases for the export abstractions:

* FS subvolumes used as Manila shares or CSI volumes

* FS-subvolume groups used as Manila share groups

Requirements
------------
Create a volume by running the following command:

ceph fs volume create <vol_name> [placement]

This creates a CephFS file system and its data and metadata pools. This command
can also deploy MDS daemons for the filesystem using a ceph-mgr orchestrator
module (for example Rook). See :doc:`/mgr/orchestrator`.
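
For example, to create a volume named ``vol_a`` (an illustrative name) and
let the orchestrator choose the MDS placement:

.. prompt:: bash $

   ceph fs volume create vol_a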

``<vol_name>`` is the volume name (an arbitrary string). ``[placement]`` is an
optional string that specifies the :ref:`orchestrator-cli-placement-spec` for
To remove a volume, run the following command:

$ ceph fs volume rm <vol_name> [--yes-i-really-mean-it]

This command removes a file system and its data and metadata pools. It also
tries to remove MDS daemons using the enabled ceph-mgr orchestrator module.
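
For example, to remove the illustrative volume ``vol_a`` created above:

.. prompt:: bash $

   ceph fs volume rm vol_a --yes-i-really-mean-it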

.. note:: After volume deletion, we recommend restarting `ceph-mgr` if a new
file system is created on the same cluster and the subvolume interface is
being used. See https://tracker.ceph.com/issues/49605#note-5 for more
details.
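
If you need to restart ``ceph-mgr``, one way (assuming a standby manager
daemon is available) is to fail the active manager over to it; your init
system can also restart the daemon directly:

.. prompt:: bash $

   ceph mgr fail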

List volumes by running the following command:

$ ceph fs volume ls

Rename a volume by running the following command:

$ ceph fs volume rename <vol_name> <new_vol_name> [--yes-i-really-mean-it]

Renaming a volume can be an expensive operation that requires the following:

- Renaming the orchestrator-managed MDS service to match the
``<new_vol_name>``. This involves launching an MDS service with
``<new_vol_name>`` and bringing down the MDS service with ``<vol_name>``.
- Renaming the file system from ``<vol_name>`` to ``<new_vol_name>``.
- Changing the application tags on the data and metadata pools of the file
system to ``<new_vol_name>``.
- Renaming the metadata and data pools of the file system.

The CephX IDs that are authorized for ``<vol_name>`` must be reauthorized for
``<new_vol_name>``. Any ongoing operations of the clients that are using these
IDs may be disrupted. Ensure that mirroring is disabled on the volume.
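
For example, to rename the illustrative volume ``vol_a`` to ``vol_b``:

.. prompt:: bash $

   ceph fs volume rename vol_a vol_b --yes-i-really-mean-it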

To fetch the information of a CephFS volume, run the following command:

$ ceph fs volume info vol_name [--human_readable]

The ``--human_readable`` flag shows used and available pool capacities in
KB/MB/GB.
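
For example, to inspect the illustrative volume ``vol_a`` with
human-readable capacities:

.. prompt:: bash $

   ceph fs volume info vol_a --human_readable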

The output format is JSON and contains fields as follows:

Create a subvolume group by running the following command:

$ ceph fs subvolumegroup create <vol_name> <group_name> [--size <size_in_bytes>] [--pool_layout <data_pool_name>] [--uid <uid>] [--gid <gid>] [--mode <octal_mode>]

The command succeeds even if the subvolume group already exists.
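
For example, a minimal invocation that accepts all the defaults, assuming
the illustrative names ``vol_a`` and ``group_a``:

.. prompt:: bash $

   ceph fs subvolumegroup create vol_a group_a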

When you create a subvolume group, you can specify its data pool layout (see
:doc:`/cephfs/file-layouts`), uid, gid, file mode in octal numerals, and
size in bytes. The size of the subvolume group is specified by setting
a quota on it (see :doc:`/cephfs/quota`). By default, the subvolume group
Remove a subvolume group by running a command of the following form:
$ ceph fs subvolumegroup rm <vol_name> <group_name> [--force]

The removal of a subvolume group fails if the subvolume group is not empty or
is non-existent. The ``--force`` flag allows the command to succeed when its
argument is a non-existent subvolume group.
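
For example, to remove the illustrative group ``group_a`` from volume
``vol_a``, succeeding even if ``group_a`` does not exist:

.. prompt:: bash $

   ceph fs subvolumegroup rm vol_a group_a --force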

Fetch the absolute path of a subvolume group by running a command of the
following form:

.. prompt:: bash #

   ceph fs subvolumegroup getpath <vol_name> <group_name>
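
For example, with the illustrative names ``vol_a`` and ``group_a``, this
prints the group's directory within the volume (typically of the form
``/volumes/<group_name>``):

.. prompt:: bash #

   ceph fs subvolumegroup getpath vol_a group_a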

List subvolume groups by running a command of the following form:

$ ceph fs subvolumegroup ls <vol_name>

.. note:: The subvolume group snapshot feature is no longer supported in
   mainline CephFS (existing group snapshots can still be listed and deleted).

Fetch the metadata of a subvolume group by running a command of the following
form:

.. prompt:: bash #

   ceph fs subvolumegroup info <vol_name> <group_name>

The output format is JSON and contains fields as follows:

* ``atime``: access time of the subvolume group path in the format ``YYYY-MM-DD
HH:MM:SS``
* ``mtime``: modification time of the subvolume group path in the format
``YYYY-MM-DD HH:MM:SS``
* ``ctime``: change time of the subvolume group path in the format ``YYYY-MM-DD
HH:MM:SS``
* ``uid``: uid of the subvolume group path
* ``gid``: gid of the subvolume group path
* ``mode``: mode of the subvolume group path
* ``created_at``: creation time of the subvolume group in the format
  ``YYYY-MM-DD HH:MM:SS``
* ``data_pool``: data pool to which the subvolume group belongs

Check for the presence of a given subvolume group by running a command of the
following form:

.. prompt:: bash $

   ceph fs subvolumegroup exist <vol_name>

The ``exist`` command outputs:

* "subvolumegroup exists": if any subvolumegroup is present
* "no subvolumegroup exists": if no subvolumegroup is present

.. note:: This command checks for the presence of custom groups and not
presence of the default one. A subvolumegroup-existence check alone is not
sufficient to validate the emptiness of the volume. Subvolume existence must
also be checked, as there might be subvolumes in the default group.
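
For example, a fuller emptiness check for the illustrative volume ``vol_a``
pairs the two commands:

.. prompt:: bash $

   ceph fs subvolumegroup exist vol_a
   ceph fs subvolume ls vol_a

The volume can be considered empty only if the first command reports "no
subvolumegroup exists" and the second lists no subvolumes.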

Resize a subvolume group by running a command of the following form:

.. prompt:: bash $

   ceph fs subvolumegroup resize <vol_name> <group_name> <new_size> [--no_shrink]

This command resizes the subvolume group quota, using the size specified by
``new_size``. The ``--no_shrink`` flag prevents the subvolume group from
shrinking below the current used size.

The subvolume group may be resized to an infinite size by passing ``inf`` or
``infinite`` as the ``new_size``.
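
For example, to cap the illustrative group ``group_a`` at 10 GiB
(10737418240 bytes) without letting it shrink below its current usage, and
then to remove the limit again:

.. prompt:: bash $

   ceph fs subvolumegroup resize vol_a group_a 10737418240 --no_shrink
   ceph fs subvolumegroup resize vol_a group_a inf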

Remove a snapshot of a subvolume group by running a command of the following
form:

.. prompt:: bash $

   ceph fs subvolumegroup snapshot rm <vol_name> <group_name> <snap_name> [--force]