luminous: mds: sanitize mdsmap of removed pools #18628

Merged 3 commits into ceph:luminous from batrick:i21953 on Nov 7, 2017


3 participants

batrick commented Oct 30, 2017

batrick added some commits Oct 3, 2017

mds: reduce variable scope
Signed-off-by: Patrick Donnelly <>
(cherry picked from commit 37884a4)
mds: clean up non-existent data pools in MDSMap
Older versions of Ceph weren't strict about preventing pool deletion when the
MDSMap referred to the to-be-deleted pool. If we are dealing with a cluster
upgrade, we should try to gracefully handle that by cleaning out data pools
that have been removed.

Reproduced this by allowing CephFS pools to be deleted:

diff --git a/src/mon/ b/src/mon/
index 85c47c13da6..694b240cb9f 100644
--- a/src/mon/
+++ b/src/mon/
@@ -10962,7 +10962,7 @@ int OSDMonitor::_check_remove_pool(int64_t pool_id, const pg_pool_t& pool,
   FSMap const &pending_fsmap = mon->mdsmon()->get_pending();
   if (pending_fsmap.pool_in_use(pool_id)) {
     *ss << "pool '" << poolstr << "' is in use by CephFS";
-    return -EBUSY;
+    //return -EBUSY;
   }

   if (pool.tier_of >= 0) {

pdonnell@icewind ~/ceph/build$ bin/ceph osd pool create derp 4 4
pool 'derp' created
pdonnell@icewind ~/ceph/build$ bin/ceph fs add_data_pool cephfs_a derp
added data pool 3 to fsmap
pdonnell@icewind ~/ceph/build$ bin/ceph osd pool rm derp derp --yes-i-really-really-mean-it
pool 'derp' is in use by CephFSpool 'derp' removed
pdonnell@icewind ~/ceph/build$ bin/ceph fs ls
2017-10-03 12:50:48.409561 7f9e2e05b700 -1 /home/pdonnell/ceph/src/osd/OSDMap.h: In function 'const string& OSDMap::get_pool_name(int64_t) const' thread 7f9e2e05b700 time 2017-10-03 12:50:48.407897
/home/pdonnell/ceph/src/osd/OSDMap.h: 1184: FAILED assert(i != pool_name.end())

 ceph version 12.1.2-2624-g37884a41964 (37884a4) mimic (dev)
  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xf5) [0x564ebb5420f5]
  2: (()+0x41dade) [0x564ebb3cbade]
  3: (MDSMonitor::preprocess_command(boost::intrusive_ptr<MonOpRequest>)+0x1fb9) [0x564ebb4cd119]

Note when testing this fix, use something like this after removing the data pool:

pdonnell@icewind ~/ceph/build$ bin/ceph fs set cephfs_a max_mds 2

Setting max_mds will cause a new FSMap to be created where MDSMap::sanitize is
called; this is simulating the initial load+sanitize of a Hammer legacy MDSMap
by the mons.


Signed-off-by: Patrick Donnelly <>

(cherry picked from commit 7adf0fb)
MDSMonitor: wait for readable OSDMap before sanitizing

Signed-off-by: Patrick Donnelly <>
(cherry picked from commit ca52f3b)

@batrick batrick added the cephfs label Oct 30, 2017

@batrick batrick added this to the luminous milestone Oct 30, 2017

@ukernel ukernel changed the title from mds: sanitize mdsmap of removed pools to luminous: mds: sanitize mdsmap of removed pools Nov 2, 2017


theanalyst commented Nov 7, 2017

@theanalyst theanalyst merged commit 77c2b0d into ceph:luminous Nov 7, 2017

4 checks passed

Docs: build check OK - docs built
Signed-off-by all commits in this PR are signed
Unmodified Submodules submodules for project are unmodified
make check make check succeeded

@batrick batrick deleted the batrick:i21953 branch Dec 14, 2017
