query
If the latest copy of the object is not available, the cluster can be
-told to roll back to a previous version of the object. See
-:doc:`troubleshooting-pg#Unfound-objects` for more information.
+told to roll back to a previous version of the object. See
+:ref:`failures-osd-unfound` for more information.
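+
+A sketch of the relevant command (the linked section covers the full
+procedure and its caveats)::
+
+    ceph pg {pg-id} mark_unfound_lost revert|delete
+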
REQUEST_SLOW
____________
diff --git a/doc/rados/operations/index.rst b/doc/rados/operations/index.rst
index 253fc2d9d0a56..4172376a101ba 100644
--- a/doc/rados/operations/index.rst
+++ b/doc/rados/operations/index.rst
@@ -11,15 +11,16 @@ restarting a cluster with the ``ceph`` service; checking the cluster's health;
and, monitoring an operating cluster.
.. toctree::
- :maxdepth: 1
-
+ :maxdepth: 1
+
operating
health-checks
monitoring
monitoring-osd-pg
user-management
+ pg-repair
-.. raw:: html
+.. raw:: html
Data Placement
@@ -48,7 +49,7 @@ CRUSH algorithm.
Low-level cluster operations consist of starting, stopping, and restarting a
particular daemon within a cluster; changing the settings of a particular
-daemon or subsystem; and, adding a daemon to the cluster or removing a daemon
+daemon or subsystem; and, adding a daemon to the cluster or removing a daemon
from the cluster. The most common use cases for low-level operations include
growing or shrinking the Ceph cluster and replacing legacy or failed hardware
with new hardware.
@@ -61,7 +62,7 @@ with new hardware.
bluestore-migration
Command Reference
-
+
.. raw:: html
@@ -72,7 +73,7 @@ you to evaluate your Ceph configuration and modify your logging and debugging
settings to identify and remedy issues you are encountering with your cluster.
.. toctree::
- :maxdepth: 1
+ :maxdepth: 1
../troubleshooting/community
../troubleshooting/troubleshooting-mon
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index fee833ad0c6c3..b0c6e329d629d 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -23,7 +23,7 @@ calculated automatically. Here are a few values commonly used:
- If you have more than 50 OSDs, you need to understand the tradeoffs
and how to calculate the ``pg_num`` value by yourself
-- For calculating ``pg_num`` value by yourself please take help of `pgcalc`_ tool
+- To calculate the ``pg_num`` value by yourself, use the `pgcalc`_ tool
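+
+  The rule of thumb that `pgcalc`_ implements is roughly the following
+  (a sketch only; prefer the tool itself)::
+
+      total PGs ~= (OSDs * 100) / pool size, rounded up to a power of two
+
+  For example, 200 OSDs with 3-way replication give (200 * 100) / 3 ~= 6667,
+  which rounds up to 8192.
+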
As the number of OSDs increases, choosing the right value for pg_num
becomes more important because it has a significant influence on the
@@ -191,7 +191,7 @@ will degrade ~4 (i.e. ~75 / 19 placement groups being recovered)
instead of ~17 and the third OSD lost will only lose data if it is one
of the four OSDs containing the surviving copy. In other words, if the
probability of losing one OSD is 0.0001% during the recovery time
-frame, it goes from 17 * 10 * 0.0001% in the cluster with 10 OSDs to 4 * 20 *
+frame, it goes from 17 * 10 * 0.0001% in the cluster with 10 OSDs to 4 * 20 *
0.0001% in the cluster with 20 OSDs.
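Concretely: 17 * 10 * 0.0001% = 0.017%, while 4 * 20 * 0.0001% = 0.008%,
less than half the risk.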
In a nutshell, more OSDs mean faster recovery and a lower risk of
@@ -250,6 +250,8 @@ they exist.
Minimizing the number of placement groups saves significant amounts of
resources.
+.. _choosing-number-of-placement-groups:
+
Choosing the number of Placement Groups
=======================================
@@ -412,7 +414,7 @@ than others (for example, those PGs may hold data for images used by running
machines and other PGs may be used by inactive machines/less relevant data).
In that case, you may want to prioritize recovery of those groups so
performance and/or availability of data stored on those groups is restored
-earlier. To do this (mark particular placement group(s) as prioritized during
+earlier. To do this (mark particular placement group(s) as prioritized during
backfill or recovery), execute the following::
ceph pg force-recovery {pg-id} [{pg-id #2}] [{pg-id #3} ...]
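
For example, with hypothetical placement group IDs::

    ceph pg force-recovery 2.1f 2.3a

The matching ``ceph pg force-backfill`` command prioritizes backfill in the
same way.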
diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst
index 70155937cad0d..10169ff62d1b7 100644
--- a/doc/rados/operations/pools.rst
+++ b/doc/rados/operations/pools.rst
@@ -6,33 +6,33 @@ When you first deploy a cluster without creating a pool, Ceph uses the default
pools for storing data. A pool provides you with:
- **Resilience**: You can set how many OSDs are allowed to fail without losing data.
- For replicated pools, it is the desired number of copies/replicas of an object.
+ For replicated pools, it is the desired number of copies/replicas of an object.
A typical configuration stores an object and one additional copy
(i.e., ``size = 2``), but you can determine the number of copies/replicas.
For `erasure coded pools <../erasure-code>`_, it is the number of coding chunks
(i.e. ``m=2`` in the **erasure code profile**)
-
+
- **Placement Groups**: You can set the number of placement groups for the pool.
- A typical configuration uses approximately 100 placement groups per OSD to
- provide optimal balancing without using up too many computing resources. When
+ A typical configuration uses approximately 100 placement groups per OSD to
+ provide optimal balancing without using up too many computing resources. When
setting up multiple pools, be careful to ensure you set a reasonable number of
- placement groups for both the pool and the cluster as a whole.
+ placement groups for both the pool and the cluster as a whole.
-- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the
- pool enables CRUSH to identify a rule for the placement of the object
- and its replicas (or chunks for erasure coded pools) in your cluster.
+- **CRUSH Rules**: When you store data in a pool, a CRUSH ruleset mapped to the
+ pool enables CRUSH to identify a rule for the placement of the object
+ and its replicas (or chunks for erasure coded pools) in your cluster.
You can create a custom CRUSH rule for your pool.
-
-- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``,
+
+- **Snapshots**: When you create snapshots with ``ceph osd pool mksnap``,
you effectively take a snapshot of a particular pool.
-
-To organize data into pools, you can list, create, and remove pools.
+
+To organize data into pools, you can list, create, and remove pools.
You can also view the utilization statistics for each pool.
List Pools
==========
-To list your cluster's pools, execute::
+To list your cluster's pools, execute::
ceph osd lspools
@@ -53,19 +53,19 @@ For details on placement group numbers refer to `setting the number of placement
application using the pool. See `Associate Pool to Application`_ below for
more information.
-For example::
+For example::
osd pool default pg num = 100
osd pool default pgp num = 100
-To create a pool, execute::
+To create a pool, execute::
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
[crush-rule-name] [expected-num-objects]
ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure \
[erasure-code-profile] [crush-rule-name] [expected_num_objects]
-Where:
+Where:
``{pool-name}``
@@ -76,7 +76,7 @@ Where:
``{pg-num}``
:Description: The total number of placement groups for the pool. See `Placement
- Groups`_ for details on calculating a suitable number. The
+ Groups`_ for details on calculating a suitable number. The
default value ``8`` is NOT suitable for most systems.
:Type: Integer
@@ -86,7 +86,7 @@ Where:
``{pgp-num}``
:Description: The total number of placement groups for placement purposes. This
- **should be equal to the total number of placement groups**, except
+ **should be equal to the total number of placement groups**, except
for placement group splitting scenarios.
:Type: Integer
@@ -105,7 +105,7 @@ Where:
implement a subset of the available operations.
:Type: String
-:Required: No.
+:Required: No.
:Default: replicated
``[crush-rule-name]``
@@ -114,7 +114,7 @@ Where:
rule must exist.
:Type: String
-:Required: No.
+:Required: No.
:Default: For **replicated** pools it is the ruleset specified by the ``osd
pool default crush replicated ruleset`` config variable. This
ruleset must exist.
@@ -128,11 +128,11 @@ Where:
.. _erasure code profile: ../erasure-code-profile
:Description: For **erasure** pools only. Use the `erasure code profile`_. It
- must be an existing profile as defined by
+ must be an existing profile as defined by
**osd erasure-code-profile set**.
:Type: String
-:Required: No.
+:Required: No.
When you create a pool, set the number of placement groups to a reasonable value
(e.g., ``100``). Consider the total number of placement groups per OSD too.
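
For example, a minimal sketch using an illustrative pool name::

    ceph osd pool create mypool 128 128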
@@ -155,7 +155,9 @@ placement groups for your pool.
:Type: Integer
:Required: No.
-:Default: 0, no splitting at the pool creation time.
+:Default: 0, no splitting at the pool creation time.
+
+.. _associate-pool-to-application:
Associate Pool to Application
=============================
@@ -177,12 +179,12 @@ a pool.::
Set Pool Quotas
===============
-You can set pool quotas for the maximum number of bytes and/or the maximum
+You can set pool quotas for the maximum number of bytes and/or the maximum
number of objects per pool. ::
- ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]
+ ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]
-For example::
+For example::
ceph osd pool set-quota data max_objects 10000
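
To remove a quota, set its value to ``0``. For example::

    ceph osd pool set-quota data max_objects 0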
@@ -203,7 +205,7 @@ configuration. Otherwise they will refuse to remove a pool.
See `Monitor Configuration`_ for more information.
.. _Monitor Configuration: ../../configuration/mon-config-ref
-
+
If you created your own rulesets and rules for a pool you created, you should
consider removing them when you no longer need your pool::
@@ -226,42 +228,42 @@ exists, you should consider deleting those users too::
Rename a Pool
=============
-To rename a pool, execute::
+To rename a pool, execute::
ceph osd pool rename {current-pool-name} {new-pool-name}
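
For example, with illustrative names::

    ceph osd pool rename testpool mypool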
-If you rename a pool and you have per-pool capabilities for an authenticated
+If you rename a pool and you have per-pool capabilities for an authenticated
user, you must update the user's capabilities (i.e., caps) with the new pool
-name.
+name.
.. note:: Version ``0.48`` Argonaut and above.
Show Pool Statistics
====================
-To show a pool's utilization statistics, execute::
+To show a pool's utilization statistics, execute::
rados df
-
+
Make a Snapshot of a Pool
=========================
-To make a snapshot of a pool, execute::
+To make a snapshot of a pool, execute::
+
+ ceph osd pool mksnap {pool-name} {snap-name}
- ceph osd pool mksnap {pool-name} {snap-name}
-
.. note:: Version ``0.48`` Argonaut and above.
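
For example, with a hypothetical snapshot name::

    ceph osd pool mksnap data data-snap-1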
Remove a Snapshot of a Pool
===========================
-To remove a snapshot of a pool, execute::
+To remove a snapshot of a pool, execute::
ceph osd pool rmsnap {pool-name} {snap-name}
-.. note:: Version ``0.48`` Argonaut and above.
+.. note:: Version ``0.48`` Argonaut and above.
.. _setpoolvalues:
@@ -269,15 +271,16 @@ To remove a snapshot of a pool, execute::
Set Pool Values
===============
-To set a value to a pool, execute the following::
+To set a value to a pool, execute the following::
ceph osd pool set {pool-name} {key} {value}
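
For example, to grow a pool's placement group count (illustrative values)::

    ceph osd pool set data pg_num 128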
-
-You may set values for the following keys:
+
+You may set values for the following keys:
.. _compression_algorithm:
``compression_algorithm``
+
:Description: Sets inline compression algorithm to use for underlying BlueStore.
This setting overrides the `global setting `_ of ``bluestore compression algorithm``.
@@ -310,8 +313,8 @@ You may set values for the following keys:
``size``
-:Description: Sets the number of replicas for objects in the pool.
- See `Set the Number of Object Replicas`_ for further details.
+:Description: Sets the number of replicas for objects in the pool.
+ See `Set the Number of Object Replicas`_ for further details.
Replicated pools only.
:Type: Integer
@@ -320,8 +323,8 @@ You may set values for the following keys:
``min_size``
-:Description: Sets the minimum number of replicas required for I/O.
- See `Set the Number of Object Replicas`_ for further details.
+:Description: Sets the minimum number of replicas required for I/O.
+ See `Set the Number of Object Replicas`_ for further details.
Replicated pools only.
:Type: Integer
@@ -331,7 +334,7 @@ You may set values for the following keys:
``pg_num``
-:Description: The effective number of placement groups to use when calculating
+:Description: The effective number of placement groups to use when calculating
data placement.
:Type: Integer
:Valid Range: Greater than the current ``pg_num`` value.
@@ -340,7 +343,7 @@ You may set values for the following keys:
``pgp_num``
-:Description: The effective number of placement groups for placement to use
+:Description: The effective number of placement groups for placement to use
when calculating data placement.
:Type: Integer
@@ -370,7 +373,7 @@ You may set values for the following keys:
:Description: Set/Unset HASHPSPOOL flag on a given pool.
:Type: Integer
:Valid Range: 1 sets flag, 0 unsets flag
-:Version: Version ``0.48`` Argonaut and above.
+:Version: Version ``0.48`` Argonaut and above.
.. _nodelete:
@@ -438,7 +441,7 @@ You may set values for the following keys:
``hit_set_count``
-:Description: The number of hit sets to store for cache pools. The higher
+:Description: The number of hit sets to store for cache pools. The higher
the number, the more RAM consumed by the ``ceph-osd`` daemon.
:Type: Integer
@@ -448,8 +451,8 @@ You may set values for the following keys:
``hit_set_period``
-:Description: The duration of a hit set period in seconds for cache pools.
- The higher the number, the more RAM consumed by the
+:Description: The duration of a hit set period in seconds for cache pools.
+ The higher the number, the more RAM consumed by the
``ceph-osd`` daemon.
:Type: Integer
@@ -470,10 +473,10 @@ You may set values for the following keys:
``cache_target_dirty_ratio``
-:Description: The percentage of the cache pool containing modified (dirty)
+:Description: The percentage of the cache pool containing modified (dirty)
objects before the cache tiering agent will flush them to the
backing storage pool.
-
+
:Type: Double
:Default: ``.4``
@@ -495,7 +498,7 @@ You may set values for the following keys:
:Description: The percentage of the cache pool containing unmodified (clean)
objects before the cache tiering agent will evict them from the
cache pool.
-
+
:Type: Double
:Default: ``.8``
@@ -503,17 +506,17 @@ You may set values for the following keys:
``target_max_bytes``
-:Description: Ceph will begin flushing or evicting objects when the
+:Description: Ceph will begin flushing or evicting objects when the
``max_bytes`` threshold is triggered.
-
+
:Type: Integer
:Example: ``1000000000000`` #1-TB
.. _target_max_objects:
-``target_max_objects``
+``target_max_objects``
-:Description: Ceph will begin flushing or evicting objects when the
+:Description: Ceph will begin flushing or evicting objects when the
``max_objects`` threshold is triggered.
:Type: Integer
@@ -540,11 +543,11 @@ You may set values for the following keys:
``cache_min_flush_age``
-:Description: The time (in seconds) before the cache tiering agent will flush
+:Description: The time (in seconds) before the cache tiering agent will flush
an object from the cache pool to the storage pool.
-
+
:Type: Integer
-:Example: ``600`` 10min
+:Example: ``600`` 10min
.. _cache_min_evict_age:
@@ -552,7 +555,7 @@ You may set values for the following keys:
:Description: The time (in seconds) before the cache tiering agent will evict
an object from the cache pool.
-
+
:Type: Integer
:Example: ``1800`` 30min
@@ -607,11 +610,11 @@ You may set values for the following keys:
Get Pool Values
===============
-To get a value from a pool, execute the following::
+To get a value from a pool, execute the following::
ceph osd pool get {pool-name} {key}
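
For example::

    ceph osd pool get data size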
-
-You may get values for the following keys:
+
+You may get values for the following keys:
``size``
@@ -691,18 +694,18 @@ You may get values for the following keys:
``cache_target_full_ratio``
:Description: see cache_target_full_ratio_
-
+
:Type: Double
``target_max_bytes``
:Description: see target_max_bytes_
-
+
:Type: Integer
-``target_max_objects``
+``target_max_objects``
:Description: see target_max_objects_
@@ -712,14 +715,14 @@ You may get values for the following keys:
``cache_min_flush_age``
:Description: see cache_min_flush_age_
-
+
:Type: Integer
``cache_min_evict_age``
:Description: see cache_min_evict_age_
-
+
:Type: Integer
@@ -754,19 +757,19 @@ You may get values for the following keys:
Set the Number of Object Replicas
=================================
-To set the number of object replicas on a replicated pool, execute the following::
+To set the number of object replicas on a replicated pool, execute the following::
ceph osd pool set {poolname} size {num-replicas}
.. important:: The ``{num-replicas}`` includes the object itself.
- If you want the object and two copies of the object for a total of
+ If you want the object and two copies of the object for a total of
three instances of the object, specify ``3``.
-
-For example::
+
+For example::
ceph osd pool set data size 3
-You may execute this command for each pool. **Note:** An object might accept
+You may execute this command for each pool. **Note:** An object might accept
I/Os in degraded mode with fewer than ``pool size`` replicas. To set a minimum
number of required replicas for I/O, you should use the ``min_size`` setting.
For example::
@@ -780,12 +783,12 @@ This ensures that no object in the data pool will receive I/O with fewer than
Get the Number of Object Replicas
=================================
-To get the number of object replicas, execute the following::
+To get the number of object replicas, execute the following::
ceph osd dump | grep 'replicated size'
-
+
Ceph will list the pools, with the ``replicated size`` attribute highlighted.
-By default, ceph creates two replicas of an object (a total of three copies, or
+By default, Ceph creates two replicas of an object (a total of three copies, or
a size of 3).
diff --git a/doc/radosgw/adminops.rst b/doc/radosgw/adminops.rst
index 422dd16527a44..281eb00b441f2 100644
--- a/doc/radosgw/adminops.rst
+++ b/doc/radosgw/adminops.rst
@@ -347,7 +347,9 @@ Request Parameters
:Type: String
:Example: ``foo_user``
:Required: Yes
-A tenant name may also specified as a part of ``uid``, by following the syntax ``tenant$user``, refer to `Multitenancy`_ for more details.
+
+A tenant name may also be specified as part of ``uid`` by following the syntax
+``tenant$user``; refer to `Multitenancy`_ for more details.
``display-name``
@@ -415,6 +417,7 @@ A tenant name may also specified as a part of ``uid``, by following the syntax `
:Required: No
.. versionadded:: Jewel
+
``tenant``
:Description: The tenant to which the user belongs.
@@ -764,7 +767,7 @@ Create Subuser
==============
Create a new subuser (primarily useful for clients using the Swift API).
-Note that in general for a subuser to be useful, it must be granted
+Note that in general for a subuser to be useful, it must be granted
permissions by specifying ``access``. As with user creation if
``subuser`` is specified without ``secret``, then a secret key will
be automatically generated.
@@ -1856,10 +1859,10 @@ Valid parameters for quotas include:
- **Maximum Objects:** The ``max-objects`` setting allows you to specify
the maximum number of objects. A negative value disables this setting.
-
+
- **Maximum Size:** The ``max-size`` option allows you to specify a quota
for the maximum number of bytes. A negative value disables this setting.
-
+
- **Quota Type:** The ``quota-type`` option sets the scope for the quota.
The options are ``bucket`` and ``user``.
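
The same quotas can also be managed from the command line; a sketch using
``radosgw-admin`` (``foo_user`` is an illustrative uid)::

    radosgw-admin quota set --quota-scope=user --uid=foo_user --max-objects=1024
    radosgw-admin quota enable --quota-scope=user --uid=foo_user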
@@ -1869,7 +1872,7 @@ Valid parameters for quotas include:
Get User Quota
~~~~~~~~~~~~~~
-To get a quota, the user must have ``users`` capability set with ``read``
+To get a quota, the user must have ``users`` capability set with ``read``
permission. ::
GET /admin/user?quota&uid=<uid>&quota-type=user
@@ -1878,7 +1881,7 @@ permission. ::
Set User Quota
~~~~~~~~~~~~~~
-To set a quota, the user must have ``users`` capability set with ``write``
+To set a quota, the user must have ``users`` capability set with ``write``
permission. ::
PUT /admin/user?quota&uid=<uid>&quota-type=user
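+
+The request body carries the quota settings; a minimal sketch, assuming the
+field names returned by the corresponding ``GET``::
+
+    { "enabled": true, "max_size_kb": 1048576, "max_objects": -1 }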
@@ -1891,7 +1894,7 @@ as encoded in the corresponding read operation.
Get Bucket Quota
~~~~~~~~~~~~~~~~
-To get a quota, the user must have ``users`` capability set with ``read``
+To get a quota, the user must have ``users`` capability set with ``read``
permission. ::
GET /admin/user?quota&uid=<uid>&quota-type=bucket
@@ -1900,7 +1903,7 @@ permission. ::
Set Bucket Quota
~~~~~~~~~~~~~~~~
-To set a quota, the user must have ``users`` capability set with ``write``
+To set a quota, the user must have ``users`` capability set with ``write``
permission. ::
PUT /admin/user?quota&uid=<uid>&quota-type=bucket
diff --git a/doc/release-notes.rst b/doc/release-notes.rst
index aa42079302a4a..a904ba58a0cd0 100644
--- a/doc/release-notes.rst
+++ b/doc/release-notes.rst
@@ -256,8 +256,8 @@ Major Changes from Kraken
- ``ceph osd crush {set,rm}-device-class`` manage the new
CRUSH *device class* feature. Note that manually creating or deleting
a device class name is generally not necessary as it will be smart
- enough to be self-managed. ``ceph osd crush class ls`` and
- ``ceph osd crush class ls-osd`` will output all existing device classes
+ enough to be self-managed. ``ceph osd crush class ls`` and
+ ``ceph osd crush class ls-osd`` will output all existing device classes
and a list of OSD ids under the given device class respectively.
- ``ceph osd crush rule create-replicated`` replaces the old
``ceph osd crush rule create-simple`` command to create a CRUSH
@@ -440,8 +440,7 @@ Upgrade compatibility notes, Kraken to Luminous
* The configuration option ``osd pool erasure code stripe width`` has
been replaced by ``osd pool erasure code stripe unit``, and given
the ability to be overridden by the erasure code profile setting
- ``stripe_unit``. For more details see
- :doc:`/rados/operations/erasure-code/#erasure-code-profiles`.
+ ``stripe_unit``. For more details see :ref:`erasure-code-profiles`.
* rbd and cephfs can use erasure coding with bluestore. This may be
enabled by setting ``allow_ec_overwrites`` to ``true`` for a pool. Since
@@ -704,7 +703,7 @@ Notable Changes
v12.1.3 Luminous (RC)
-====================
+=====================
This is the fourth release candidate for Luminous, the next long term stable
release.
@@ -1030,7 +1029,7 @@ stable release.
* rgw: raise debug level of RGWPostObj_ObjStore_S3::get_policy (`pr#16203 <https://github.com/ceph/ceph/pull/16203>`_, Shasha Lu)
* rgw: req xml params size limitation error msg (`pr#16310 <https://github.com/ceph/ceph/pull/16310>`_, Enming Zhang)
* rgw: restore admin socket path in mrgw.sh (`pr#16540 <https://github.com/ceph/ceph/pull/16540>`_, Casey Bodley)
-* rgw: rgw_file: properly & |'d flags (`issue#20663 <http://tracker.ceph.com/issues/20663>`_, `pr#16448 <https://github.com/ceph/ceph/pull/16448>`_, Matt Benjamin)
+* rgw: rgw_file: properly & \|'d flags (`issue#20663 <http://tracker.ceph.com/issues/20663>`_, `pr#16448 <https://github.com/ceph/ceph/pull/16448>`_, Matt Benjamin)
* rgw: rgw multisite: feature of bucket sync enable/disable (`pr#15801 <https://github.com/ceph/ceph/pull/15801>`_, Zhang Shaowen, Casey Bodley, Zengran Zhang)
* rgw: should unlock when reshard_log->update() reture non-zero in RGWB… (`pr#16502 <https://github.com/ceph/ceph/pull/16502>`_, Wei Qiaomiao)
* rgw: test,rgw: fix rgw placement rule pool config option (`pr#16380 <https://github.com/ceph/ceph/pull/16380>`_, Jiaying Ren)
@@ -1119,7 +1118,7 @@ Other Notable Changes
* cephfs: Remove "experimental" warnings from multimds (`pr#15154 <https://github.com/ceph/ceph/pull/15154>`_, John Spray, "Yan, Zheng")
* cleanup: test,mon,msg: kill clang analyzer warnings (`pr#16320 <https://github.com/ceph/ceph/pull/16320>`_, Kefu Chai)
* cmake: fix the build with -DWITH_ZFS=ON (`pr#15907 <https://github.com/ceph/ceph/pull/15907>`_, Kefu Chai)
-* cmake: Rewrite HAVE_BABELTRACE option to WITH_ (`pr#15305 <https://github.com/ceph/ceph/pull/15305>`_, Willem Jan Withagen)
+* cmake: Rewrite HAVE_BABELTRACE option to WITH (`pr#15305 <https://github.com/ceph/ceph/pull/15305>`_, Willem Jan Withagen)
* common: auth/RotatingKeyRing: use std::move() to set secrets (`pr#15866 <https://github.com/ceph/ceph/pull/15866>`_, Kefu Chai)
* common: ceph.in, mgr: misc cleanups (`pr#16229 <https://github.com/ceph/ceph/pull/16229>`_, liuchang0812)
* common: common,config: OPT_FLOAT and OPT_DOUBLE output format in config show (`issue#20104 <http://tracker.ceph.com/issues/20104>`_, `pr#15647 <https://github.com/ceph/ceph/pull/15647>`_, Yanhu Cao)
@@ -1237,7 +1236,7 @@ Other Notable Changes
* mon: Division by zero in PGMapDigest::dump_pool_stats_full() (`pr#15901 <https://github.com/ceph/ceph/pull/15901>`_, Jos Collin)
* mon: do crushtool test with fork and timeout, but w/o exec of crushtool (`issue#19964 <http://tracker.ceph.com/issues/19964>`_, `pr#16025 <https://github.com/ceph/ceph/pull/16025>`_, Sage Weil)
* mon: Filter `log last` output by severity and channel (`pr#15924 <https://github.com/ceph/ceph/pull/15924>`_, John Spray)
-* mon: fix hang on deprecated/removed 'pg set_\*full_ratio' commands (`issue#20600 <http://tracker.ceph.com/issues/20600>`_, `pr#16300 <https://github.com/ceph/ceph/pull/16300>`_, Sage Weil)
+* mon: fix hang on deprecated/removed 'pg set_*full_ratio' commands (`issue#20600 <http://tracker.ceph.com/issues/20600>`_, `pr#16300 <https://github.com/ceph/ceph/pull/16300>`_, Sage Weil)
* mon: fix kvstore type in mon compact command (`pr#15954 <https://github.com/ceph/ceph/pull/15954>`_, liuchang0812)
* mon: Fix status output warning for mon_warn_osd_usage_min_max_delta (`issue#20544 <http://tracker.ceph.com/issues/20544>`_, `pr#16220 <https://github.com/ceph/ceph/pull/16220>`_, David Zafman)
* mon: handle cases where store->get() may return error (`issue#19601 <http://tracker.ceph.com/issues/19601>`_, `pr#14678 <https://github.com/ceph/ceph/pull/14678>`_, Jos Collin)
@@ -3938,7 +3937,7 @@ of BlueStore include:
The BlueStore on-disk format is expected to continue to evolve. However, we
will provide support in the OSD to migrate to the new format on upgrade.
-
+
.. note:: BlueStore is still marked "experimental" in Kraken. We
recommend its use for proof-of-concept and test environments, or
other cases where data loss can be tolerated. Although it is
@@ -6768,7 +6767,7 @@ Major Changes from Infernalis
and similar projects.
* There is now experimental support for multiple CephFS file systems
within a single cluster.
-
+
- *RGW*:
* The multisite feature has been almost completely rearchitected and
@@ -6792,7 +6791,7 @@ Major Changes from Infernalis
and a new rbd-mirror daemon that performs the cross-cluster
replication.
* The exclusive-lock, object-map, fast-diff, and journaling features
- can be enabled or disabled dynamically. The deep-flatten features
+ can be enabled or disabled dynamically. The deep-flatten feature
can be disabled dynamically but not re-enabled.
* The RBD CLI has been rewritten to provide command-specific help
and full bash completion support.
@@ -8947,7 +8946,7 @@ Notable Changes since Hammer
* tests: many many ec test improvements (Loic Dachary)
* upstart: throttle restarts (#11798 Sage Weil, Greg Farnum)
-
+
v10.1.2 Jewel (release candidate)
=================================
@@ -9165,7 +9164,7 @@ Notable Changes since v10.1.0
v10.1.0 Jewel (release candidate)
=================================
-
+
There are a few known issues with this release candidate; see below.
Known Issues with v10.1.0
@@ -11165,16 +11164,16 @@ Getting the release candidate
-----------------------------
The v9.1.0 packages are pushed to the development release repositories::
-
+
http://download.ceph.com/rpm-testing
http://download.ceph.com/debian-testing
For more info, see::
-
+
http://docs.ceph.com/docs/master/install/get-packages/
Or install with ceph-deploy via::
-
+
ceph-deploy install --testing HOST
@@ -11268,7 +11267,7 @@ Upgrading directly from Firefly v0.80.z is not possible. All clusters
must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
then is it possible to do online upgrade to Infernalis 9.2.z.
-User can upgrade to latest hammer v0.94.z
+Users can upgrade to the latest hammer v0.94.z
from gitbuilder with (also refer to the hammer release notes for more details)::
ceph-deploy install --release hammer HOST
@@ -11289,7 +11288,7 @@ Upgrading from Hammer
The main notable distro that is *not* yet using systemd is Ubuntu trusty
14.04. (The next Ubuntu LTS, 16.04, will use systemd instead of upstart.)
-
+
* Ceph daemons now run as user and group ``ceph`` by default. The
ceph user has a static UID assigned by Fedora and Debian (also used
by derivative distributions like RHEL/CentOS and Ubuntu). On SUSE
@@ -11325,7 +11324,7 @@ Upgrading from Hammer
service ceph stop # fedora, centos, rhel, debian
stop ceph-all # ubuntu
-
+
#. Fix the ownership::
chown -R ceph:ceph /var/lib/ceph
@@ -12173,7 +12172,7 @@ Notable Changes
* tools, test: Add ceph-objectstore-tool to operate on the meta collection (`issue#14977 <http://tracker.ceph.com/issues/14977>`_, `pr#7911 <https://github.com/ceph/ceph/pull/7911>`_, David Zafman)
* unittest_crypto: benchmark 100,000 CryptoKey::encrypt() calls (`issue#14863 <http://tracker.ceph.com/issues/14863>`_, `pr#7801 <https://github.com/ceph/ceph/pull/7801>`_, Sage Weil)
-
+
v0.94.6 Hammer
======================
@@ -12506,7 +12505,7 @@ Notable Changes
For more detailed information, see :download:`the complete changelog `.
-
+
v0.94.2 Hammer
==============
@@ -15230,11 +15229,11 @@ Upgrading
replaced by ``cluster_osd_bytes``).
* The ``rd_kb`` and ``wr_kb`` fields in the JSON dumps for pool stats (accessed
- via the ``ceph df detail -f json-pretty`` and related commands) have been
- replaced with corresponding ``*_bytes`` fields. Similarly, the
- ``total_space``, ``total_used``, and ``total_avail`` fields are replaced with
+ via the ``ceph df detail -f json-pretty`` and related commands) have been
+ replaced with corresponding ``*_bytes`` fields. Similarly, the
+ ``total_space``, ``total_used``, and ``total_avail`` fields are replaced with
``total_bytes``, ``total_used_bytes``, and ``total_avail_bytes`` fields.
-
+
* The ``rados df --format=json`` output ``read_bytes`` and ``write_bytes``
fields were incorrectly reporting ops; this is now fixed.
@@ -18259,9 +18258,9 @@ For more detailed information, see :download:`the complete changelog |