diff --git a/admin/build-doc b/admin/build-doc index 0caa048767c8a..e62c584019633 100755 --- a/admin/build-doc +++ b/admin/build-doc @@ -111,8 +111,11 @@ for target in $sphinx_targets; do extra_opt="-t man" ;; esac - $vdir/bin/sphinx-build -a -b $builder $extra_opt -d doctrees \ + # Build with -W so that warnings are treated as errors and this fails + $vdir/bin/sphinx-build -W -a -b $builder $extra_opt -d doctrees \ $TOPDIR/doc $TOPDIR/build-doc/output/$target + + done # diff --git a/admin/doc-requirements.txt b/admin/doc-requirements.txt index aba92c28bef9d..3990b2f31bdc9 100644 --- a/admin/doc-requirements.txt +++ b/admin/doc-requirements.txt @@ -1,3 +1,3 @@ -Sphinx == 1.1.3 +Sphinx == 1.6.3 -e git+https://github.com/ceph/sphinx-ditaa.git#egg=sphinx-ditaa -e git+https://github.com/michaeljones/breathe#egg=breathe diff --git a/doc/cephfs/health-messages.rst b/doc/cephfs/health-messages.rst index 54b4f7144e90f..5e9f796787b59 100644 --- a/doc/cephfs/health-messages.rst +++ b/doc/cephfs/health-messages.rst @@ -1,4 +1,6 @@ +.. _cephfs-health-messages: + ====================== CephFS health messages ====================== diff --git a/doc/conf.py b/doc/conf.py index c4d8cb073efe4..ccef4c6dbd7cc 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -11,16 +11,20 @@ master_doc = 'index' exclude_patterns = ['**/.#*', '**/*~', 'start/quick-common.rst'] if tags.has('man'): - exclude_patterns += ['architecture.rst', 'glossary.rst', 'release*.rst', + master_doc = 'man_index' + exclude_patterns += ['index.rst', 'architecture.rst', 'glossary.rst', 'release*.rst', 'api/*', 'cephfs/*', 'dev/*', 'install/*', 'mon/*', 'rados/*', + 'mgr/*', 'radosgw/*', 'rbd/*', 'start/*'] +else: + exclude_patterns += ['man_index.rst'] pygments_style = 'sphinx' diff --git a/doc/dev/logging.rst b/doc/dev/logging.rst index 9c2a6f3e0ef34..467ea1e89f137 100644 --- a/doc/dev/logging.rst +++ b/doc/dev/logging.rst @@ -74,7 +74,7 @@ internal structures if they are the direct subject of the message, for example in a corruption, but use plain english. Example: instead of "Objecter requests" say "OSD client requests" Example: it is okay to mention internal structure in the context - of "Corrupt session table" (but don't say "Corrupt SessionTable") +of "Corrupt session table" (but don't say "Corrupt SessionTable") Where possible, describe the consequence for system availability, rather than only describing the underlying state. For example, rather than diff --git a/doc/index.rst b/doc/index.rst index 253e2a4f54911..ffd97c69a17db 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -3,7 +3,7 @@ ================= Ceph uniquely delivers **object, block, and file storage in one unified -system**. +system**. .. raw:: html @@ -21,7 +21,7 @@ system**. - Multi-site deployment - Multi-site replication -.. raw:: html +.. raw:: html
Ceph Block Device
@@ -38,10 +38,10 @@ system**. - Incremental backup - Disaster recovery (multisite asynchronous replication) -.. raw:: html +.. raw:: html
Ceph Filesystem
- + - POSIX-compliant semantics - Separates metadata from data - Dynamic rebalancing @@ -55,22 +55,22 @@ system**. .. raw:: html - + See `Ceph Object Store`_ for additional details. .. raw:: html - + See `Ceph Block Device`_ for additional details. - + .. raw:: html - -See `Ceph Filesystem`_ for additional details. - -.. raw:: html + +See `Ceph Filesystem`_ for additional details. + +.. raw:: html @@ -88,7 +88,7 @@ about Ceph, see our `Architecture`_ section. .. _Architecture: architecture .. toctree:: - :maxdepth: 1 + :maxdepth: 3 :hidden: start/intro diff --git a/doc/man_index.rst b/doc/man_index.rst new file mode 100644 index 0000000000000..2330968f98d66 --- /dev/null +++ b/doc/man_index.rst @@ -0,0 +1,43 @@ +.. this is the man index/toctree reference. It is separate from the "regular" +.. index so that it doesn't include things that are not tagged for man pages + +.. toctree:: + :maxdepth: 3 + :hidden: + + man/8/ceph-authtool + man/8/ceph-clsinfo + man/8/ceph-conf + man/8/ceph-create-keys + man/8/ceph-debugpack + man/8/ceph-dencoder + man/8/ceph-deploy + man/8/ceph-detect-init + man/8/ceph-disk + man/8/ceph-fuse + man/8/ceph-mds + man/8/ceph-mon + man/8/ceph-osd + man/8/ceph-post-file + man/8/ceph-rbdnamer + man/8/ceph-rest-api + man/8/ceph-run + man/8/ceph-syn + man/8/ceph + man/8/crushtool + man/8/librados-config + man/8/monmaptool + man/8/mount.ceph + man/8/osdmaptool + man/8/rados + man/8/radosgw-admin + man/8/radosgw + man/8/rbd-fuse + man/8/rbd-ggate + man/8/rbd-mirror + man/8/rbd-nbd + man/8/rbd-replay-many + man/8/rbd-replay-prep + man/8/rbd-replay + man/8/rbd + man/8/rbdmap diff --git a/doc/rados/api/python.rst b/doc/rados/api/python.rst index b4fd7e04dc1ef..d7555686b0be0 100644 --- a/doc/rados/api/python.rst +++ b/doc/rados/api/python.rst @@ -15,14 +15,14 @@ Getting Started You can create your own Ceph client using Python. The following tutorial will show you how to import the Ceph Python module, connect to a Ceph cluster, and -perform object operations as a ``client.admin`` user. +perform object operations as a ``client.admin`` user. -.. note:: To use the Ceph Python bindings, you must have access to a +.. note:: To use the Ceph Python bindings, you must have access to a running Ceph cluster. To set one up quickly, see `Getting Started`_. First, create a Python source file for your Ceph client. :: :linenos: - + sudo vim client.py @@ -54,7 +54,7 @@ of the initial Ceph monitors. :linenos: import rados, sys - + #Create Handle Examples. cluster = rados.Rados(conffile='ceph.conf') cluster = rados.Rados(conffile=sys.argv[1]) @@ -62,13 +62,13 @@ of the initial Ceph monitors. Ensure that the ``conffile`` argument provides the path and file name of your Ceph configuration file. You may use the ``sys`` module to avoid hard-coding the -Ceph configuration path and file name. +Ceph configuration path and file name. Your Python client also requires a client keyring. For this example, we use the ``client.admin`` key by default. If you would like to specify the keyring when creating the cluster handle, you may use the ``conf`` argument. Alternatively, -you may specify the keyring path in your Ceph configuration file. For example, -you may add something like the following line to you Ceph configuration file:: +you may specify the keyring path in your Ceph configuration file. 
For example, +you may add something like the following line to you Ceph configuration file:: keyring = /path/to/ceph.client.admin.keyring @@ -78,7 +78,7 @@ For additional details on modifying your configuration via Python, see `Configur Connect to the Cluster ---------------------- -Once you have a cluster handle configured, you may connect to the cluster. +Once you have a cluster handle configured, you may connect to the cluster. With a connection to the cluster, you may execute methods that return information about the cluster. @@ -87,11 +87,11 @@ information about the cluster. :emphasize-lines: 7 import rados, sys - + cluster = rados.Rados(conffile='ceph.conf') print "\nlibrados version: " + str(cluster.version()) - print "Will attempt to connect to: " + str(cluster.conf_get('mon initial members')) - + print "Will attempt to connect to: " + str(cluster.conf_get('mon initial members')) + cluster.connect() print "\nCluster ID: " + cluster.get_fsid() @@ -112,7 +112,7 @@ configuration file example uses the ``client.admin`` keyring you generated with .. code-block:: ini :linenos: - + [global] ... keyring=/path/to/keyring/ceph.client.admin.keyring @@ -123,7 +123,7 @@ Manage Pools When connected to the cluster, the ``Rados`` API allows you to manage pools. You can list pools, check for the existence of a pool, create a pool and delete a -pool. +pool. .. code-block:: python :linenos: @@ -174,7 +174,7 @@ to use. Once you have an I/O context, you can read/write objects, extended attributes, and perform a number of other operations. After you complete operations, ensure -that you close the connection. For example: +that you close the connection. For example: .. code-block:: python :linenos: @@ -190,18 +190,18 @@ Once you create an I/O context, you can write objects to the cluster. If you write to an object that doesn't exist, Ceph creates it. If you write to an object that exists, Ceph overwrites it (except when you specify a range, and then it only overwrites the range). You may read objects (and object ranges) -from the cluster. You may also remove objects from the cluster. For example: +from the cluster. You may also remove objects from the cluster. For example: .. code-block:: python :linenos: :emphasize-lines: 2, 5, 8 - + print "\nWriting object 'hw' with contents 'Hello World!' to pool 'data'." ioctx.write_full("hw", "Hello World!") print "\n\nContents of object 'hw'\n------------------------\n" print ioctx.read("hw") - + print "\nRemoving object 'hw'" ioctx.remove_object("hw") @@ -210,7 +210,7 @@ Writing and Reading XATTRS -------------------------- Once you create an object, you can write extended attributes (XATTRs) to -the object and read XATTRs from the object. For example: +the object and read XATTRs from the object. For example: .. code-block:: python :linenos: @@ -226,7 +226,7 @@ the object and read XATTRs from the object. For example: Listing Objects --------------- -If you want to examine the list of objects in a pool, you may +If you want to examine the list of objects in a pool, you may retrieve the list of objects and iterate over them with the object iterator. 
For example: @@ -236,12 +236,12 @@ For example: object_iterator = ioctx.list_objects() - while True : - - try : + while True : + + try : rados_object = object_iterator.next() print "Object contents = " + rados_object.read() - + except StopIteration : break @@ -260,7 +260,7 @@ Configuration ------------- The ``Rados`` class provides methods for getting and setting configuration -values, reading the Ceph configuration file, and parsing arguments. You +values, reading the Ceph configuration file, and parsing arguments. You do not need to be connected to the Ceph Storage Cluster to invoke the following methods. See `Storage Cluster Configuration`_ for details on settings. @@ -269,7 +269,7 @@ methods. See `Storage Cluster Configuration`_ for details on settings. .. automethod:: Rados.conf_set(option, val) .. automethod:: Rados.conf_read_file(path=None) .. automethod:: Rados.conf_parse_argv(args) -.. automethod:: Rados.version() +.. automethod:: Rados.version() Connection Management @@ -280,12 +280,22 @@ the cluster ``fsid``, retrieve cluster statistics, and disconnect (shutdown) from the cluster. You may also assert that the cluster handle is in a particular state (e.g., "configuring", "connecting", etc.). - .. automethod:: Rados.connect(timeout=0) .. automethod:: Rados.shutdown() .. automethod:: Rados.get_fsid() .. automethod:: Rados.get_cluster_stats() -.. automethod:: Rados.require_state(*args) + +.. documented manually because it raises warnings because of *args usage in the +.. signature + +.. py:class:: Rados + + .. py:method:: require_state(*args) + + Checks if the Rados object is in a special state + + :param args: Any number of states to check as separate arguments + :raises: :class:`RadosStateError` Pool Operations @@ -307,8 +317,8 @@ Input/Output Context API To write data to and read data from the Ceph Object Store, you must create an Input/Output context (ioctx). The `Rados` class provides a `open_ioctx()` -method. The remaining ``ioctx`` operations involve invoking methods of the -`Ioctx` and other classes. +method. The remaining ``ioctx`` operations involve invoking methods of the +`Ioctx` and other classes. .. automethod:: Rados.open_ioctx(ioctx_name) .. automethod:: Ioctx.require_ioctx_open() @@ -322,7 +332,7 @@ method. The remaining ``ioctx`` operations involve invoking methods of the .. -------------- .. The Ceph Storage Cluster allows you to make a snapshot of a pool's state. -.. Whereas, basic pool operations only require a connection to the cluster, +.. Whereas, basic pool operations only require a connection to the cluster, .. snapshots require an I/O context. .. Ioctx.create_snap(self, snap_name) diff --git a/doc/rados/configuration/bluestore-config-ref.rst b/doc/rados/configuration/bluestore-config-ref.rst index 8d8ace653c459..ee98346431854 100644 --- a/doc/rados/configuration/bluestore-config-ref.rst +++ b/doc/rados/configuration/bluestore-config-ref.rst @@ -182,7 +182,7 @@ operation. The modes are: * **force**: Try to compress data no matter what. For more information about the *compressible* and *incompressible* IO -hints, see :doc:`/api/librados/#rados_set_alloc_hint`. +hints, see :c:func:`rados_set_alloc_hint`. Note that regardless of the mode, if the size of the data chunk is not reduced sufficiently it will not be used and the original @@ -216,11 +216,12 @@ set with:: :Description: The default policy for using compression if the per-pool property ``compression_mode`` is not set. ``none`` means never use - compression. 
``passive`` means use compression when - `clients hint`_ that data is compressible. ``aggressive`` means - use compression unless clients hint that data is not compressible. - ``force`` means use compression under all circumstances even if - the clients hint that the data is not compressible. + compression. ``passive`` means use compression when + :c:func:`clients hint ` that data is + compressible. ``aggressive`` means use compression unless + clients hint that data is not compressible. ``force`` means use + compression under all circumstances even if the clients hint that + the data is not compressible. :Type: String :Required: No :Valid Settings: ``none``, ``passive``, ``aggressive``, ``force`` @@ -293,5 +294,3 @@ set with:: :Type: Unsigned Integer :Required: No :Default: 64K - -.. _clients hint: ../../api/librados/#rados_set_alloc_hint diff --git a/doc/rados/operations/crush-map-edits.rst b/doc/rados/operations/crush-map-edits.rst index 52222703823c9..bbebd21f18169 100644 --- a/doc/rados/operations/crush-map-edits.rst +++ b/doc/rados/operations/crush-map-edits.rst @@ -652,3 +652,5 @@ Again, the special ``--enable-unsafe-tunables`` option is required. Further, as noted above, be careful running old versions of the ``ceph-osd`` daemon after reverting to legacy values as the feature bit is not perfectly enforced. + +.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf diff --git a/doc/rados/operations/crush-map.rst b/doc/rados/operations/crush-map.rst index 05fa4ff691aef..2a8f609d8c561 100644 --- a/doc/rados/operations/crush-map.rst +++ b/doc/rados/operations/crush-map.rst @@ -9,9 +9,9 @@ through a centralized server or broker. With an algorithmically determined method of storing and retrieving data, Ceph avoids a single point of failure, a performance bottleneck, and a physical limit to its scalability. -CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly -store and retrieve data in OSDs with a uniform distribution of data across the -cluster. For a detailed discussion of CRUSH, see +CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly +store and retrieve data in OSDs with a uniform distribution of data across the +cluster. For a detailed discussion of CRUSH, see `CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data`_ CRUSH maps contain a list of :abbr:`OSDs (Object Storage Devices)`, a list of @@ -51,8 +51,8 @@ Note: #. Note that the order of the keys does not matter. #. The key name (left of ``=``) must be a valid CRUSH ``type``. By default - these include root, datacenter, room, row, pod, pdu, rack, chassis and host, - but those types can be customized to be anything appropriate by modifying + these include root, datacenter, room, row, pod, pdu, rack, chassis and host, + but those types can be customized to be anything appropriate by modifying the CRUSH map. #. Not all keys need to be specified. 
For example, by default, Ceph automatically sets a ``ceph-osd`` daemon's location to be @@ -151,12 +151,12 @@ leaves, interior nodes with non-device types, and a root node of type | {o}root default | +--------+--------+ | - +---------------+---------------+ + +---------------+---------------+ | | +-------+-------+ +-----+-------+ | {o}host foo | | {o}host bar | +-------+-------+ +-----+-------+ - | | + | | +-------+-------+ +-------+-------+ | | | | +-----+-----+ +-----+-----+ +-----+-----+ +-----+-----+ @@ -197,7 +197,7 @@ specifying the *pool type* they will be used for (replicated or erasure coded), the *failure domain*, and optionally a *device class*. In rare cases rules must be written by hand by manually editing the CRUSH map. - + You can see what rules are defined for your cluster with:: ceph osd crush rule ls @@ -313,7 +313,7 @@ Where: ``name`` -:Description: The full name of the OSD. +:Description: The full name of the OSD. :Type: String :Required: Yes :Example: ``osd.0`` @@ -337,7 +337,7 @@ Where: ``bucket-type`` -:Description: You may specify the OSD's location in the CRUSH hierarchy. +:Description: You may specify the OSD's location in the CRUSH hierarchy. :Type: Key/value pairs. :Required: No :Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1`` @@ -365,7 +365,7 @@ Where: ``name`` -:Description: The full name of the OSD. +:Description: The full name of the OSD. :Type: String :Required: Yes :Example: ``osd.0`` @@ -373,7 +373,7 @@ Where: ``weight`` -:Description: The CRUSH weight for the OSD. +:Description: The CRUSH weight for the OSD. :Type: Double :Required: Yes :Example: ``2.0`` @@ -396,7 +396,7 @@ Where: ``name`` -:Description: The full name of the OSD. +:Description: The full name of the OSD. :Type: String :Required: Yes :Example: ``osd.0`` @@ -458,7 +458,7 @@ Where: ``bucket-type`` -:Description: You may specify the bucket's location in the CRUSH hierarchy. +:Description: You may specify the bucket's location in the CRUSH hierarchy. :Type: Key/value pairs. :Required: No :Example: ``datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1`` @@ -658,6 +658,8 @@ Rules that are not in use by pools can be deleted with:: ceph osd crush rule rm {rule-name} +.. _crush-map-tunables: + Tunables ======== @@ -739,7 +741,7 @@ The new tunable is: CRUSH is sometimes unable to find a mapping. The optimal value (in terms of computational cost and correctness) is 1. -Migration impact: +Migration impact: * For existing clusters that have lots of existing data, changing from 0 to 1 will cause a lot of data to move; a value of 4 or 5 diff --git a/doc/rados/operations/erasure-code-profile.rst b/doc/rados/operations/erasure-code-profile.rst index ddf772d36ca73..1f4ba5c4b4fcf 100644 --- a/doc/rados/operations/erasure-code-profile.rst +++ b/doc/rados/operations/erasure-code-profile.rst @@ -1,3 +1,5 @@ +.. _erasure-code-profiles: + ===================== Erasure code profiles ===================== diff --git a/doc/rados/operations/health-checks.rst b/doc/rados/operations/health-checks.rst index 6164355798680..34b12aff2fab9 100644 --- a/doc/rados/operations/health-checks.rst +++ b/doc/rados/operations/health-checks.rst @@ -15,7 +15,7 @@ health checks, and present them in a way that reflects their meaning. This page lists the health checks that are raised by the monitor and manager daemons. 
In addition to these, you may also see health checks that originate -from MDS daemons (see :doc:`/cephfs/health-messages`), and health checks +from MDS daemons (see :ref:`cephfs-health-messages`), and health checks that are defined by ceph-mgr python modules. Definitions @@ -90,7 +90,7 @@ threshold by a small amount:: New storage should be added to the cluster by deploying more OSDs or existing data should be deleted in order to free up space. - + OSD_BACKFILLFULL ________________ @@ -136,7 +136,7 @@ With the exception of *full*, these flags can be set or cleared with:: ceph osd set ceph osd unset - + OSD_FLAGS _________ @@ -165,7 +165,7 @@ The CRUSH map is using very old settings and should be updated. The oldest tunables that can be used (i.e., the oldest client version that can connect to the cluster) without triggering this health warning is determined by the ``mon_crush_min_required_version`` config option. -See :doc:`/rados/operations/crush-map/#tunables` for more information. +See :ref:`crush-map-tunables` for more information. OLD_CRUSH_STRAW_CALC_VERSION ____________________________ @@ -175,7 +175,7 @@ intermediate weight values for ``straw`` buckets. The CRUSH map should be updated to use the newer method (``straw_calc_version=1``). See -:doc:`/rados/operations/crush-map/#tunables` for more information. +:ref:`crush-map-tunables` for more information. CACHE_POOL_NO_HIT_SET _____________________ @@ -189,7 +189,7 @@ Hit sets can be configured on the cache pool with:: ceph osd pool set hit_set_type ceph osd pool set hit_set_period ceph osd pool set hit_set_count - ceph osd pool set hit_set_fpp + ceph osd pool set hit_set_fpp OSD_NO_SORTBITWISE __________________ @@ -327,10 +327,9 @@ the cluster, and similar reduce overall performance. This may be an expected condition if data pools have not yet been created. -The PG count for existing pools can be increased or new pools can be -created. Please refer to -:doc:`placement-groups#Choosing-the-number-of-Placement-Groups` for -more information. +The PG count for existing pools can be increased or new pools can be created. +Please refer to :ref:`choosing-number-of-placement-groups` for more +information. TOO_MANY_PGS ____________ @@ -348,9 +347,8 @@ described above. The ``pgp_num`` value can be adjusted with:: ceph osd pool set pgp_num -Please refer to -:doc:`placement-groups#Choosing-the-number-of-Placement-Groups` for -more information. +Please refer to :ref:`choosing-number-of-placement-groups` for more +information. SMALLER_PGP_NUM _______________ @@ -401,7 +399,7 @@ via the low-level command:: ceph osd pool application enable foo -For more information, see :doc:`pools.rst#associate-pool-to-application`. +For more information, see :ref:`associate-pool-to-application`. POOL_FULL _________ @@ -415,7 +413,7 @@ Pool quotas can be adjusted up or down (or removed) with:: ceph osd pool set-quota max_bytes ceph osd pool set-quota max_objects -Setting the quota value to 0 will disable the quota. +Setting the quota value to 0 will disable the quota. POOL_NEAR_FULL ______________ @@ -460,8 +458,8 @@ peering state for the PG(s) responsible for the unfound object:: ceph tell query If the latest copy of the object is not available, the cluster can be -told to roll back to a previous version of the object. See -:doc:`troubleshooting-pg#Unfound-objects` for more information. +told to roll back to a previous version of the object. See +:ref:`failures-osd-unfound` for more information. 
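As an illustrative aside to the fullness-related checks above (``OSD_NEARFULL``, ``POOL_NEAR_FULL``, ``POOL_FULL``), the same cluster-wide usage numbers can be read through the Python ``rados`` bindings documented in ``doc/rados/api/python.rst``. This is only a sketch, not part of the patch: the configuration path is a placeholder and the 85% cut-off simply mirrors the usual default nearfull ratio.

.. code-block:: python

    # Sketch only: read cluster usage via the Python rados bindings and flag
    # when it approaches the (assumed default) nearfull ratio.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()  # keys: kb, kb_used, kb_avail, num_objects
        used = float(stats['kb_used']) / float(stats['kb'])
        print("cluster usage: %.1f%%" % (used * 100))
        if used > 0.85:  # roughly the default nearfull ratio; adjust for your cluster
            print("warning: approaching the nearfull threshold")
    finally:
        cluster.shutdown()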
REQUEST_SLOW ____________ diff --git a/doc/rados/operations/index.rst b/doc/rados/operations/index.rst index 253fc2d9d0a56..4172376a101ba 100644 --- a/doc/rados/operations/index.rst +++ b/doc/rados/operations/index.rst @@ -11,15 +11,16 @@ restarting a cluster with the ``ceph`` service; checking the cluster's health; and, monitoring an operating cluster. .. toctree:: - :maxdepth: 1 - + :maxdepth: 1 + operating health-checks monitoring monitoring-osd-pg user-management + pg-repair -.. raw:: html +.. raw:: html
Data Placement
@@ -48,7 +49,7 @@ CRUSH algorithm. Low-level cluster operations consist of starting, stopping, and restarting a particular daemon within a cluster; changing the settings of a particular -daemon or subsystem; and, adding a daemon to the cluster or removing a daemon +daemon or subsystem; and, adding a daemon to the cluster or removing a daemon from the cluster. The most common use cases for low-level operations include growing or shrinking the Ceph cluster and replacing legacy or failed hardware with new hardware. @@ -61,7 +62,7 @@ with new hardware. bluestore-migration Command Reference - + .. raw:: html @@ -72,7 +73,7 @@ you to evaluate your Ceph configuration and modify your logging and debugging settings to identify and remedy issues you are encountering with your cluster. .. toctree:: - :maxdepth: 1 + :maxdepth: 1 ../troubleshooting/community ../troubleshooting/troubleshooting-mon diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst index fee833ad0c6c3..b0c6e329d629d 100644 --- a/doc/rados/operations/placement-groups.rst +++ b/doc/rados/operations/placement-groups.rst @@ -23,7 +23,7 @@ calculated automatically. Here are a few values commonly used: - If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the ``pg_num`` value by yourself -- For calculating ``pg_num`` value by yourself please take help of `pgcalc`_ tool +- For calculating ``pg_num`` value by yourself please take help of `pgcalc`_ tool As the number of OSDs increases, chosing the right value for pg_num becomes more important because it has a significant influence on the @@ -191,7 +191,7 @@ will degrade ~4 (i.e. ~75 / 19 placement groups being recovered) instead of ~17 and the third OSD lost will only lose data if it is one of the four OSDs containing the surviving copy. In other words, if the probability of losing one OSD is 0.0001% during the recovery time -frame, it goes from 17 * 10 * 0.0001% in the cluster with 10 OSDs to 4 * 20 * +frame, it goes from 17 * 10 * 0.0001% in the cluster with 10 OSDs to 4 * 20 * 0.0001% in the cluster with 20 OSDs. In a nutshell, more OSDs mean faster recovery and a lower risk of @@ -250,6 +250,8 @@ they exist. Minimizing the number of placement groups saves significant amounts of resources. +.. _choosing-number-of-placement-groups: + Choosing the number of Placement Groups ======================================= @@ -412,7 +414,7 @@ than others (for example, those PGs may hold data for images used by running machines and other PGs may be used by inactive machines/less relevant data). In that case, you may want to prioritize recovery of those groups so performance and/or availability of data stored on those groups is restored -earlier. To do this (mark particular placement group(s) as prioritized during +earlier. To do this (mark particular placement group(s) as prioritized during backfill or recovery), execute the following:: ceph pg force-recovery {pg-id} [{pg-id #2}] [{pg-id #3} ...] diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst index 70155937cad0d..10169ff62d1b7 100644 --- a/doc/rados/operations/pools.rst +++ b/doc/rados/operations/pools.rst @@ -6,33 +6,33 @@ When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data. A pool provides you with: - **Resilience**: You can set how many OSD are allowed to fail without losing data. - For replicated pools, it is the desired number of copies/replicas of an object. 
+ For replicated pools, it is the desired number of copies/replicas of an object. A typical configuration stores an object and one additional copy (i.e., ``size = 2``), but you can determine the number of copies/replicas. For `erasure coded pools <../erasure-code>`_, it is the number of coding chunks (i.e. ``m=2`` in the **erasure code profile**) - + - **Placement Groups**: You can set the number of placement groups for the pool. - A typical configuration uses approximately 100 placement groups per OSD to - provide optimal balancing without using up too many computing resources. When + A typical configuration uses approximately 100 placement groups per OSD to + provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of - placement groups for both the pool and the cluster as a whole. + placement groups for both the pool and the cluster as a whole. -- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the - pool enables CRUSH to identify a rule for the placement of the object - and its replicas (or chunks for erasure coded pools) in your cluster. +- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the + pool enables CRUSH to identify a rule for the placement of the object + and its replicas (or chunks for erasure coded pools) in your cluster. You can create a custom CRUSH rule for your pool. - -- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``, + +- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``, you effectively take a snapshot of a particular pool. - -To organize data into pools, you can list, create, and remove pools. + +To organize data into pools, you can list, create, and remove pools. You can also view the utilization statistics for each pool. List Pools ========== -To list your cluster's pools, execute:: +To list your cluster's pools, execute:: ceph osd lspools @@ -53,19 +53,19 @@ For details on placement group numbers refer to `setting the number of placement application using the pool. See `Associate Pool to Application`_ below for more information. -For example:: +For example:: osd pool default pg num = 100 osd pool default pgp num = 100 -To create a pool, execute:: +To create a pool, execute:: ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \ [crush-rule-name] [expected-num-objects] ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \ [erasure-code-profile] [crush-rule-name] [expected_num_objects] -Where: +Where: ``{pool-name}`` @@ -76,7 +76,7 @@ Where: ``{pg-num}`` :Description: The total number of placement groups for the pool. See `Placement - Groups`_ for details on calculating a suitable number. The + Groups`_ for details on calculating a suitable number. The default value ``8`` is NOT suitable for most systems. :Type: Integer @@ -86,7 +86,7 @@ Where: ``{pgp-num}`` :Description: The total number of placement groups for placement purposes. This - **should be equal to the total number of placement groups**, except + **should be equal to the total number of placement groups**, except for placement group splitting scenarios. :Type: Integer @@ -105,7 +105,7 @@ Where: implement a subset of the available operations. :Type: String -:Required: No. +:Required: No. :Default: replicated ``[crush-rule-name]`` @@ -114,7 +114,7 @@ Where: rule must exist. :Type: String -:Required: No. +:Required: No. 
:Default: For **replicated** pools it is the ruleset specified by the ``osd pool default crush replicated ruleset`` config variable. This ruleset must exist. @@ -128,11 +128,11 @@ Where: .. _erasure code profile: ../erasure-code-profile :Description: For **erasure** pools only. Use the `erasure code profile`_. It - must be an existing profile as defined by + must be an existing profile as defined by **osd erasure-code-profile set**. :Type: String -:Required: No. +:Required: No. When you create a pool, set the number of placement groups to a reasonable value (e.g., ``100``). Consider the total number of placement groups per OSD too. @@ -155,7 +155,9 @@ placement groups for your pool. :Type: Integer :Required: No. -:Default: 0, no splitting at the pool creation time. +:Default: 0, no splitting at the pool creation time. + +.. _associate-pool-to-application: Associate Pool to Application ============================= @@ -177,12 +179,12 @@ a pool.:: Set Pool Quotas =============== -You can set pool quotas for the maximum number of bytes and/or the maximum +You can set pool quotas for the maximum number of bytes and/or the maximum number of objects per pool. :: - ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}] + ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}] -For example:: +For example:: ceph osd pool set-quota data max_objects 10000 @@ -203,7 +205,7 @@ configuration. Otherwise they will refuse to remove a pool. See `Monitor Configuration`_ for more information. .. _Monitor Configuration: ../../configuration/mon-config-ref - + If you created your own rulesets and rules for a pool you created, you should consider removing them when you no longer need your pool:: @@ -226,42 +228,42 @@ exists, you should consider deleting those users too:: Rename a Pool ============= -To rename a pool, execute:: +To rename a pool, execute:: ceph osd pool rename {current-pool-name} {new-pool-name} -If you rename a pool and you have per-pool capabilities for an authenticated +If you rename a pool and you have per-pool capabilities for an authenticated user, you must update the user's capabilities (i.e., caps) with the new pool -name. +name. .. note:: Version ``0.48`` Argonaut and above. Show Pool Statistics ==================== -To show a pool's utilization statistics, execute:: +To show a pool's utilization statistics, execute:: rados df - + Make a Snapshot of a Pool ========================= -To make a snapshot of a pool, execute:: +To make a snapshot of a pool, execute:: + + ceph osd pool mksnap {pool-name} {snap-name} - ceph osd pool mksnap {pool-name} {snap-name} - .. note:: Version ``0.48`` Argonaut and above. Remove a Snapshot of a Pool =========================== -To remove a snapshot of a pool, execute:: +To remove a snapshot of a pool, execute:: ceph osd pool rmsnap {pool-name} {snap-name} -.. note:: Version ``0.48`` Argonaut and above. +.. note:: Version ``0.48`` Argonaut and above. .. _setpoolvalues: @@ -269,15 +271,16 @@ To remove a snapshot of a pool, execute:: Set Pool Values =============== -To set a value to a pool, execute the following:: +To set a value to a pool, execute the following:: ceph osd pool set {pool-name} {key} {value} - -You may set values for the following keys: + +You may set values for the following keys: .. _compression_algorithm: ``compression_algorithm`` + :Description: Sets inline compression algorithm to use for underlying BlueStore. 
This setting overrides the `global setting `_ of ``bluestore compression algorithm``. @@ -310,8 +313,8 @@ You may set values for the following keys: ``size`` -:Description: Sets the number of replicas for objects in the pool. - See `Set the Number of Object Replicas`_ for further details. +:Description: Sets the number of replicas for objects in the pool. + See `Set the Number of Object Replicas`_ for further details. Replicated pools only. :Type: Integer @@ -320,8 +323,8 @@ You may set values for the following keys: ``min_size`` -:Description: Sets the minimum number of replicas required for I/O. - See `Set the Number of Object Replicas`_ for further details. +:Description: Sets the minimum number of replicas required for I/O. + See `Set the Number of Object Replicas`_ for further details. Replicated pools only. :Type: Integer @@ -331,7 +334,7 @@ You may set values for the following keys: ``pg_num`` -:Description: The effective number of placement groups to use when calculating +:Description: The effective number of placement groups to use when calculating data placement. :Type: Integer :Valid Range: Superior to ``pg_num`` current value. @@ -340,7 +343,7 @@ You may set values for the following keys: ``pgp_num`` -:Description: The effective number of placement groups for placement to use +:Description: The effective number of placement groups for placement to use when calculating data placement. :Type: Integer @@ -370,7 +373,7 @@ You may set values for the following keys: :Description: Set/Unset HASHPSPOOL flag on a given pool. :Type: Integer :Valid Range: 1 sets flag, 0 unsets flag -:Version: Version ``0.48`` Argonaut and above. +:Version: Version ``0.48`` Argonaut and above. .. _nodelete: @@ -438,7 +441,7 @@ You may set values for the following keys: ``hit_set_count`` -:Description: The number of hit sets to store for cache pools. The higher +:Description: The number of hit sets to store for cache pools. The higher the number, the more RAM consumed by the ``ceph-osd`` daemon. :Type: Integer @@ -448,8 +451,8 @@ You may set values for the following keys: ``hit_set_period`` -:Description: The duration of a hit set period in seconds for cache pools. - The higher the number, the more RAM consumed by the +:Description: The duration of a hit set period in seconds for cache pools. + The higher the number, the more RAM consumed by the ``ceph-osd`` daemon. :Type: Integer @@ -470,10 +473,10 @@ You may set values for the following keys: ``cache_target_dirty_ratio`` -:Description: The percentage of the cache pool containing modified (dirty) +:Description: The percentage of the cache pool containing modified (dirty) objects before the cache tiering agent will flush them to the backing storage pool. - + :Type: Double :Default: ``.4`` @@ -495,7 +498,7 @@ You may set values for the following keys: :Description: The percentage of the cache pool containing unmodified (clean) objects before the cache tiering agent will evict them from the cache pool. - + :Type: Double :Default: ``.8`` @@ -503,17 +506,17 @@ You may set values for the following keys: ``target_max_bytes`` -:Description: Ceph will begin flushing or evicting objects when the +:Description: Ceph will begin flushing or evicting objects when the ``max_bytes`` threshold is triggered. - + :Type: Integer :Example: ``1000000000000`` #1-TB .. 
_target_max_objects: -``target_max_objects`` +``target_max_objects`` -:Description: Ceph will begin flushing or evicting objects when the +:Description: Ceph will begin flushing or evicting objects when the ``max_objects`` threshold is triggered. :Type: Integer @@ -540,11 +543,11 @@ You may set values for the following keys: ``cache_min_flush_age`` -:Description: The time (in seconds) before the cache tiering agent will flush +:Description: The time (in seconds) before the cache tiering agent will flush an object from the cache pool to the storage pool. - + :Type: Integer -:Example: ``600`` 10min +:Example: ``600`` 10min .. _cache_min_evict_age: @@ -552,7 +555,7 @@ You may set values for the following keys: :Description: The time (in seconds) before the cache tiering agent will evict an object from the cache pool. - + :Type: Integer :Example: ``1800`` 30min @@ -607,11 +610,11 @@ You may set values for the following keys: Get Pool Values =============== -To get a value from a pool, execute the following:: +To get a value from a pool, execute the following:: ceph osd pool get {pool-name} {key} - -You may get values for the following keys: + +You may get values for the following keys: ``size`` @@ -691,18 +694,18 @@ You may get values for the following keys: ``cache_target_full_ratio`` :Description: see cache_target_full_ratio_ - + :Type: Double ``target_max_bytes`` :Description: see target_max_bytes_ - + :Type: Integer -``target_max_objects`` +``target_max_objects`` :Description: see target_max_objects_ @@ -712,14 +715,14 @@ You may get values for the following keys: ``cache_min_flush_age`` :Description: see cache_min_flush_age_ - + :Type: Integer ``cache_min_evict_age`` :Description: see cache_min_evict_age_ - + :Type: Integer @@ -754,19 +757,19 @@ You may get values for the following keys: Set the Number of Object Replicas ================================= -To set the number of object replicas on a replicated pool, execute the following:: +To set the number of object replicas on a replicated pool, execute the following:: ceph osd pool set {poolname} size {num-replicas} .. important:: The ``{num-replicas}`` includes the object itself. - If you want the object and two copies of the object for a total of + If you want the object and two copies of the object for a total of three instances of the object, specify ``3``. - -For example:: + +For example:: ceph osd pool set data size 3 -You may execute this command for each pool. **Note:** An object might accept +You may execute this command for each pool. **Note:** An object might accept I/Os in degraded mode with fewer than ``pool size`` replicas. To set a minimum number of required replicas for I/O, you should use the ``min_size`` setting. For example:: @@ -780,12 +783,12 @@ This ensures that no object in the data pool will receive I/O with fewer than Get the Number of Object Replicas ================================= -To get the number of object replicas, execute the following:: +To get the number of object replicas, execute the following:: ceph osd dump | grep 'replicated size' - + Ceph will list the pools, with the ``replicated size`` attribute highlighted. -By default, ceph creates two replicas of an object (a total of three copies, or +By default, ceph creates two replicas of an object (a total of three copies, or a size of 3). 
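The pool operations in this file are all expressed through the ``ceph osd pool`` CLI. As a non-authoritative sketch, the same list/exists/create/delete round trip is also available from the Python ``rados`` bindings covered in ``doc/rados/api/python.rst``; the pool name ``scratch`` and the configuration path below are placeholders.

.. code-block:: python

    # Sketch only: basic pool management via the Python rados bindings.
    # The canonical interface for these operations is `ceph osd pool ...`.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
    cluster.connect()
    try:
        print("existing pools: " + str(cluster.list_pools()))

        if not cluster.pool_exists('scratch'):
            cluster.create_pool('scratch')      # pg_num falls back to the cluster default

        ioctx = cluster.open_ioctx('scratch')
        ioctx.write_full('hw', 'Hello World!')  # writing to a new object creates it
        print(ioctx.read('hw'))
        ioctx.close()

        cluster.delete_pool('scratch')          # monitors must allow pool deletion
    finally:
        cluster.shutdown()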
diff --git a/doc/radosgw/adminops.rst b/doc/radosgw/adminops.rst index 422dd16527a44..281eb00b441f2 100644 --- a/doc/radosgw/adminops.rst +++ b/doc/radosgw/adminops.rst @@ -347,7 +347,9 @@ Request Parameters :Type: String :Example: ``foo_user`` :Required: Yes -A tenant name may also specified as a part of ``uid``, by following the syntax ``tenant$user``, refer to `Multitenancy`_ for more details. + +A tenant name may also specified as a part of ``uid``, by following the syntax +``tenant$user``, refer to `Multitenancy`_ for more details. ``display-name`` @@ -415,6 +417,7 @@ A tenant name may also specified as a part of ``uid``, by following the syntax ` :Required: No .. versionadded:: Jewel + ``tenant`` :Description: the Tenant under which a user is a part of. @@ -764,7 +767,7 @@ Create Subuser ============== Create a new subuser (primarily useful for clients using the Swift API). -Note that in general for a subuser to be useful, it must be granted +Note that in general for a subuser to be useful, it must be granted permissions by specifying ``access``. As with user creation if ``subuser`` is specified without ``secret``, then a secret key will be automatically generated. @@ -1856,10 +1859,10 @@ Valid parameters for quotas include: - **Maximum Objects:** The ``max-objects`` setting allows you to specify the maximum number of objects. A negative value disables this setting. - + - **Maximum Size:** The ``max-size`` option allows you to specify a quota for the maximum number of bytes. A negative value disables this setting. - + - **Quota Type:** The ``quota-type`` option sets the scope for the quota. The options are ``bucket`` and ``user``. @@ -1869,7 +1872,7 @@ Valid parameters for quotas include: Get User Quota ~~~~~~~~~~~~~~ -To get a quota, the user must have ``users`` capability set with ``read`` +To get a quota, the user must have ``users`` capability set with ``read`` permission. :: GET /admin/user?quota&uid="a-type=user @@ -1878,7 +1881,7 @@ permission. :: Set User Quota ~~~~~~~~~~~~~~ -To set a quota, the user must have ``users`` capability set with ``write`` +To set a quota, the user must have ``users`` capability set with ``write`` permission. :: PUT /admin/user?quota&uid="a-type=user @@ -1891,7 +1894,7 @@ as encoded in the corresponding read operation. Get Bucket Quota ~~~~~~~~~~~~~~~~ -To get a quota, the user must have ``users`` capability set with ``read`` +To get a quota, the user must have ``users`` capability set with ``read`` permission. :: GET /admin/user?quota&uid="a-type=bucket @@ -1900,7 +1903,7 @@ permission. :: Set Bucket Quota ~~~~~~~~~~~~~~~~ -To set a quota, the user must have ``users`` capability set with ``write`` +To set a quota, the user must have ``users`` capability set with ``write`` permission. :: PUT /admin/user?quota&uid="a-type=bucket diff --git a/doc/release-notes.rst b/doc/release-notes.rst index aa42079302a4a..a904ba58a0cd0 100644 --- a/doc/release-notes.rst +++ b/doc/release-notes.rst @@ -256,8 +256,8 @@ Major Changes from Kraken - ``ceph osd crush {set,rm}-device-class`` manage the new CRUSH *device class* feature. Note that manually creating or deleting a device class name is generally not necessary as it will be smart - enough to be self-managed. ``ceph osd crush class ls`` and - ``ceph osd crush class ls-osd`` will output all existing device classes + enough to be self-managed. ``ceph osd crush class ls`` and + ``ceph osd crush class ls-osd`` will output all existing device classes and a list of OSD ids under the given device class respectively. 
- ``ceph osd crush rule create-replicated`` replaces the old ``ceph osd crush rule create-simple`` command to create a CRUSH @@ -440,8 +440,7 @@ Upgrade compatibility notes, Kraken to Luminous * The configuration option ``osd pool erasure code stripe width`` has been replaced by ``osd pool erasure code stripe unit``, and given the ability to be overridden by the erasure code profile setting - ``stripe_unit``. For more details see - :doc:`/rados/operations/erasure-code/#erasure-code-profiles`. + ``stripe_unit``. For more details see :ref:`erasure-code-profiles`. * rbd and cephfs can use erasure coding with bluestore. This may be enabled by setting ``allow_ec_overwrites`` to ``true`` for a pool. Since @@ -704,7 +703,7 @@ Notable Changes v12.1.3 Luminous (RC) -==================== +===================== This is the fourth release candidate for Luminous, the next long term stable release. @@ -1030,7 +1029,7 @@ stable release. * rgw: raise debug level of RGWPostObj_ObjStore_S3::get_policy (`pr#16203 `_, Shasha Lu) * rgw: req xml params size limitation error msg (`pr#16310 `_, Enming Zhang) * rgw: restore admin socket path in mrgw.sh (`pr#16540 `_, Casey Bodley) -* rgw: rgw_file: properly & |'d flags (`issue#20663 `_, `pr#16448 `_, Matt Benjamin) +* rgw: rgw_file: properly & \|'d flags (`issue#20663 `_, `pr#16448 `_, Matt Benjamin) * rgw: rgw multisite: feature of bucket sync enable/disable (`pr#15801 `_, Zhang Shaowen, Casey Bodley, Zengran Zhang) * rgw: should unlock when reshard_log->update() reture non-zero in RGWB… (`pr#16502 `_, Wei Qiaomiao) * rgw: test,rgw: fix rgw placement rule pool config option (`pr#16380 `_, Jiaying Ren) @@ -1119,7 +1118,7 @@ Other Notable Changes * cephfs: Remove "experimental" warnings from multimds (`pr#15154 `_, John Spray, "Yan, Zheng") * cleanup: test,mon,msg: kill clang analyzer warnings (`pr#16320 `_, Kefu Chai) * cmake: fix the build with -DWITH_ZFS=ON (`pr#15907 `_, Kefu Chai) -* cmake: Rewrite HAVE_BABELTRACE option to WITH_ (`pr#15305 `_, Willem Jan Withagen) +* cmake: Rewrite HAVE_BABELTRACE option to WITH (`pr#15305 `_, Willem Jan Withagen) * common: auth/RotatingKeyRing: use std::move() to set secrets (`pr#15866 `_, Kefu Chai) * common: ceph.in, mgr: misc cleanups (`pr#16229 `_, liuchang0812) * common: common,config: OPT_FLOAT and OPT_DOUBLE output format in config show (`issue#20104 `_, `pr#15647 `_, Yanhu Cao) @@ -1237,7 +1236,7 @@ Other Notable Changes * mon: Division by zero in PGMapDigest::dump_pool_stats_full() (`pr#15901 `_, Jos Collin) * mon: do crushtool test with fork and timeout, but w/o exec of crushtool (`issue#19964 `_, `pr#16025 `_, Sage Weil) * mon: Filter `log last` output by severity and channel (`pr#15924 `_, John Spray) -* mon: fix hang on deprecated/removed 'pg set_\*full_ratio' commands (`issue#20600 `_, `pr#16300 `_, Sage Weil) +* mon: fix hang on deprecated/removed 'pg set_*full_ratio' commands (`issue#20600 `_, `pr#16300 `_, Sage Weil) * mon: fix kvstore type in mon compact command (`pr#15954 `_, liuchang0812) * mon: Fix status output warning for mon_warn_osd_usage_min_max_delta (`issue#20544 `_, `pr#16220 `_, David Zafman) * mon: handle cases where store->get() may return error (`issue#19601 `_, `pr#14678 `_, Jos Collin) @@ -3938,7 +3937,7 @@ of BlueStore include: The BlueStore on-disk format is expected to continue to evolve. However, we will provide support in the OSD to migrate to the new format on upgrade. - + .. note: BlueStore is still marked "experimental" in Kraken. 
We recommend its use for proof-of-concept and test environments, or other cases where data loss can be tolerated. Although it is @@ -6768,7 +6767,7 @@ Major Changes from Infernalis and similar projects. * There is now experimental support for multiple CephFS file systems within a single cluster. - + - *RGW*: * The multisite feature has been almost completely rearchitected and @@ -6792,7 +6791,7 @@ Major Changes from Infernalis and a new rbd-mirror daemon that performs the cross-cluster replication. * The exclusive-lock, object-map, fast-diff, and journaling features - can be enabled or disabled dynamically. The deep-flatten features + can be enabled or disabled dynamically. The deep-flatten features can be disabled dynamically but not re-enabled. * The RBD CLI has been rewritten to provide command-specific help and full bash completion support. @@ -8947,7 +8946,7 @@ Notable Changes since Hammer * tests: many many ec test improvements (Loic Dachary) * upstart: throttle restarts (#11798 Sage Weil, Greg Farnum) - + v10.1.2 Jewel (release candidate) ================================= @@ -9165,7 +9164,7 @@ Notable Changes since v10.1.0 v10.1.0 Jewel (release candidate) ================================= - + There are a few known issues with this release candidate; see below. Known Issues with v10.1.0 @@ -11165,16 +11164,16 @@ Getting the release candidate ----------------------------- The v9.1.0 packages are pushed to the development release repositories:: - + http://download.ceph.com/rpm-testing http://download.ceph.com/debian-testing For for info, see:: - + http://docs.ceph.com/docs/master/install/get-packages/ Or install with ceph-deploy via:: - + ceph-deploy install --testing HOST @@ -11268,7 +11267,7 @@ Upgrading directly from Firefly v0.80.z is not possible. All clusters must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only then is it possible to do online upgrade to Infernalis 9.2.z. -User can upgrade to latest hammer v0.94.z +User can upgrade to latest hammer v0.94.z from gitbuilder with(also refer the hammer release notes for more details):: ceph-deploy install --release hammer HOST @@ -11289,7 +11288,7 @@ Upgrading from Hammer The main notable distro that is *not* yet using systemd is Ubuntu trusty 14.04. (The next Ubuntu LTS, 16.04, will use systemd instead of upstart.) - + * Ceph daemons now run as user and group ``ceph`` by default. The ceph user has a static UID assigned by Fedora and Debian (also used by derivative distributions like RHEL/CentOS and Ubuntu). On SUSE @@ -11325,7 +11324,7 @@ Upgrading from Hammer service ceph stop # fedora, centos, rhel, debian stop ceph-all # ubuntu - + #. Fix the ownership:: chown -R ceph:ceph /var/lib/ceph @@ -12173,7 +12172,7 @@ Notable Changes * tools, test: Add ceph-objectstore-tool to operate on the meta collection (`issue#14977 `_, `pr#7911 `_, David Zafman) * unittest_crypto: benchmark 100,000 CryptoKey::encrypt() calls (`issue#14863 `_, `pr#7801 `_, Sage Weil) - + v0.94.6 Hammer ====================== @@ -12506,7 +12505,7 @@ Notable Changes For more detailed information, see :download:`the complete changelog `. - + v0.94.2 Hammer ============== @@ -15230,11 +15229,11 @@ Upgrading replaced by ``cluster_osd_bytes``). * The ``rd_kb`` and ``wr_kb`` fields in the JSON dumps for pool stats (accessed - via the ``ceph df detail -f json-pretty`` and related commands) have been - replaced with corresponding ``*_bytes`` fields. 
Similarly, the - ``total_space``, ``total_used``, and ``total_avail`` fields are replaced with + via the ``ceph df detail -f json-pretty`` and related commands) have been + replaced with corresponding ``*_bytes`` fields. Similarly, the + ``total_space``, ``total_used``, and ``total_avail`` fields are replaced with ``total_bytes``, ``total_used_bytes``, and ``total_avail_bytes`` fields. - + * The ``rados df --format=json`` output ``read_bytes`` and ``write_bytes`` fields were incorrectly reporting ops; this is now fixed. @@ -18259,9 +18258,9 @@ For more detailed information, see :download:`the complete changelog