Merge pull request #13346 from batrick/mds-doc-fix
doc: update to new ceph fs commands

Reviewed-by: John Spray <john.spray@redhat.com>
John Spray committed Feb 20, 2017
2 parents ecb4820 + 8501341 commit a4fcdb6
Showing 3 changed files with 6 additions and 6 deletions.
2 changes: 1 addition & 1 deletion doc/cephfs/file-layouts.rst
@@ -199,7 +199,7 @@ Before you can use a pool with CephFS you have to add it to the Metadata Servers

 .. code-block:: bash
 
-    $ ceph mds add_data_pool cephfs_data_ssd
+    $ ceph fs add_data_pool cephfs cephfs_data_ssd
     # Pool should now show up
     $ ceph fs ls
     .... data pools: [cephfs_data cephfs_data_ssd ]
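
For context beyond the diff: ``ceph fs add_data_pool`` registers an existing pool, so ``cephfs_data_ssd`` must be created first. A minimal sketch, assuming a placement-group count of 64 (an illustrative value, not taken from this commit)::

    $ ceph osd pool create cephfs_data_ssd 64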
4 changes: 2 additions & 2 deletions doc/cephfs/hadoop.rst
@@ -141,7 +141,7 @@ documentation`_.

 Once a pool has been created and configured, the metadata service must be told
 that the new pool may be used to store file data. A pool is made available
-for storing file system data using the ``ceph mds add_data_pool`` command.
+for storing file system data using the ``ceph fs add_data_pool`` command.
 
 First, create the pool. In this example we create the ``hadoop1`` pool with
 replication factor 1. ::
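
The literal block that follows is collapsed in this view. As a hedged sketch of creating a pool with replication factor 1, assuming a placement-group count of 100 (an illustrative value)::

    ceph osd pool create hadoop1 100
    ceph osd pool set hadoop1 size 1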
@@ -162,7 +162,7 @@ The output should resemble::
 where ``3`` is the pool id. Next we will use the pool id reference to register
 the pool as a data pool for storing file system data. ::
 
-    ceph mds add_data_pool 3
+    ceph fs add_data_pool cephfs 3
 
 The final step is to configure Hadoop to consider this data pool when
 selecting the target pool for new files. ::
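
The configuration block itself is collapsed in this view. As a sketch, assuming the CephFS Hadoop plugin's ``ceph.data.pools`` property in ``core-site.xml``, with the value being this example's pool::

    <property>
      <name>ceph.data.pools</name>
      <value>hadoop1</value>
    </property>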
6 changes: 3 additions & 3 deletions doc/cephfs/mantle.rst
@@ -76,8 +76,8 @@ Mantle with `vstart.sh`

 ::
 
-    bin/ceph mds set allow_multimds true --yes-i-really-mean-it
-    bin/ceph mds set max_mds 5
+    bin/ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
+    bin/ceph fs set cephfs max_mds 5
     bin/ceph fs set cephfs_a balancer greedyspill.lua


@@ -161,7 +161,7 @@ Implementation Details
 Most of the implementation is in MDBalancer. Metrics are passed to the balancer
 policies via the Lua stack and a list of loads is returned to MDBalancer.
 It sits alongside the current balancer implementation and it's enabled with a
-Ceph CLI command ("ceph mds set balancer mybalancer.lua"). If the Lua policy
+Ceph CLI command ("ceph fs set cephfs balancer mybalancer.lua"). If the Lua policy
 fails (for whatever reason), we fall back to the original metadata load
 balancer. The balancer is stored in the RADOS metadata pool and a string in the
 MDSMap tells the MDSs which balancer to use.
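
For context beyond the diff: the balancer script must already exist as an object in the metadata pool before the CLI command can point at it. A minimal sketch, assuming the metadata pool is named ``cephfs_metadata`` and the policy file is ``mybalancer.lua`` (both placeholder names)::

    rados put --pool=cephfs_metadata mybalancer.lua mybalancer.lua
    ceph fs set cephfs balancer mybalancer.lua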
