From 8501341886287fa3b5cbcba3e0af4395caf65571 Mon Sep 17 00:00:00 2001
From: Patrick Donnelly
Date: Thu, 9 Feb 2017 21:19:31 -0500
Subject: [PATCH] doc: update to new ceph fs commands

These `ceph mds ...` commands are deprecated.

Signed-off-by: Patrick Donnelly
---
 doc/cephfs/file-layouts.rst | 2 +-
 doc/cephfs/hadoop.rst       | 4 ++--
 doc/cephfs/mantle.rst       | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/doc/cephfs/file-layouts.rst b/doc/cephfs/file-layouts.rst
index e72fa69b06259..a268016577349 100644
--- a/doc/cephfs/file-layouts.rst
+++ b/doc/cephfs/file-layouts.rst
@@ -199,7 +199,7 @@ Before you can use a pool with CephFS you have to add it to the Metadata Servers
 
 .. code-block:: bash
 
-    $ ceph mds add_data_pool cephfs_data_ssd
+    $ ceph fs add_data_pool cephfs cephfs_data_ssd
     # Pool should now show up
     $ ceph fs ls
     .... data pools: [cephfs_data cephfs_data_ssd ]
diff --git a/doc/cephfs/hadoop.rst b/doc/cephfs/hadoop.rst
index fc3f3000fb0d7..76d26f27d4ba8 100644
--- a/doc/cephfs/hadoop.rst
+++ b/doc/cephfs/hadoop.rst
@@ -141,7 +141,7 @@ documentation`_.
 
 Once a pool has been created and configured the metadata service must be told
 that the new pool may be used to store file data. A pool is be made available
-for storing file system data using the ``ceph mds add_data_pool`` command.
+for storing file system data using the ``ceph fs add_data_pool`` command.
 
 First, create the pool. In this example we create the ``hadoop1`` pool with
 replication factor 1. ::
@@ -162,7 +162,7 @@ The output should resemble::
 where ``3`` is the pool id. Next we will use the pool id reference to register
 the pool as a data pool for storing file system data. ::
 
-    ceph mds add_data_pool 3
+    ceph fs add_data_pool cephfs 3
 
 The final step is to configure Hadoop to consider this data pool when
 selecting the target pool for new files. ::
diff --git a/doc/cephfs/mantle.rst b/doc/cephfs/mantle.rst
index 3631614365e73..8a7d729ac38fe 100644
--- a/doc/cephfs/mantle.rst
+++ b/doc/cephfs/mantle.rst
@@ -76,8 +76,8 @@ Mantle with `vstart.sh`
 
 ::
 
-    bin/ceph mds set allow_multimds true --yes-i-really-mean-it
-    bin/ceph mds set max_mds 5
+    bin/ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
+    bin/ceph fs set cephfs max_mds 5
     bin/ceph fs set cephfs_a balancer greedyspill.lua
 
 
@@ -161,7 +161,7 @@ Implementation Details
 Most of the implementation is in MDBalancer. Metrics are passed to the balancer
 policies via the Lua stack and a list of loads is returned back to MDBalancer.
 It sits alongside the current balancer implementation and it's enabled with a
-Ceph CLI command ("ceph mds set balancer mybalancer.lua"). If the Lua policy
+Ceph CLI command ("ceph fs set cephfs balancer mybalancer.lua"). If the Lua policy
 fails (for whatever reason), we fall back to the original metadata load
 balancer. The balancer is stored in the RADOS metadata pool and a string in the
 MDSMap tells the MDSs which balancer to use.
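
For context, a minimal sketch of the data pool workflow as it reads after this patch, assuming a file system named ``cephfs`` and the illustrative pool name ``cephfs_data_ssd`` from the file-layouts example (the pg count is arbitrary)::

    # Create the pool that will hold CephFS file data (pg count is illustrative)
    $ ceph osd pool create cephfs_data_ssd 64

    # Register it with the file system; this is the form that replaces the
    # deprecated `ceph mds add_data_pool <pool>` command
    $ ceph fs add_data_pool cephfs cephfs_data_ssd

    # Verify: the new pool should now be listed under "data pools"
    $ ceph fs ls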