
pybind/mgr/dashboard: initial block integration #15521

Merged: 1 commit merged into ceph:master from dillaman:wip-rbd-dashboard on Jun 27, 2017

Conversation

3 participants
@dillaman (Contributor) commented Jun 6, 2017

Signed-off-by: Jason Dillaman <dillaman@redhat.com>

@dillaman requested a review from jcsp on Jun 6, 2017

@@ -97,6 +97,25 @@
return truncated_float + units[unit];
};
rivets.formatters.dimless_binary = function(n){

@jcsp (Contributor) commented Jun 6, 2017

Could take the common bits of this and dimless into a function that just takes the 1024/1000 and the list of suffixes?

self.log.error("Failed to open pool " + pool)
return self.rbd_pools

class RbdPools(VersionedSyncObject):

@jcsp (Contributor) commented Jun 6, 2017

The whole VersionedSyncObject is a bit of a red herring here, I should rip it out. It's a hangover from Calamari when there was a layer beneath that used the version info to work out when something fresh needed fetching.

@dillaman (Author, Contributor) commented Jun 6, 2017

Yeah -- I couldn't for the life of me figure out how it was being used. I just figured it was a "TODO", so if I could just add code to cache RBD pools per epoch, it's one less thing to change if and when it does cache.
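
For illustration, a minimal sketch of the per-epoch caching idea; `current_epoch` and `scan_for_rbd_pools` are placeholders for the sketch, not the module's actual API:

```python
# Sketch only: cache the list of RBD-using pools and rescan only when the
# OSD map epoch changes.  scan_for_rbd_pools stands in for whatever the
# module would use to detect pools that contain an rbd_directory object.

class RbdPoolCache(object):
    def __init__(self):
        self._epoch = None   # OSD map epoch the cache was built against
        self._pools = []     # cached list of RBD-using pool names

    def get(self, current_epoch, scan_for_rbd_pools):
        if self._epoch != current_epoch:
            self._pools = scan_for_rbd_pools()
            self._epoch = current_epoch
        return self._pools
```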

ctx_capsule = ceph_state.get_context()

self.log.info("Constructing Rados")
cluster = rados.Rados(context=ctx_capsule)

@jcsp (Contributor) commented Jun 6, 2017

when I wrote the rbd_ls.py stuff it was the only place spinning up a rados.Rados -- now that we're using it elsewhere, the instance should probably be shared
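
For illustration, one way the sharing could look, assuming `ceph_state` is available to the module as in the diff above; the accessor name and locking are made up for the sketch:

```python
import threading

import rados

# Sketch only: build the rados.Rados handle once and hand the same instance
# to every caller instead of constructing one per use.  ceph_state is the
# same object used in the diff above; get_shared_cluster and the lock are
# illustrative names, not the module's real API.
_cluster = None
_cluster_lock = threading.Lock()

def get_shared_cluster():
    global _cluster
    with _cluster_lock:
        if _cluster is None:
            ctx_capsule = ceph_state.get_context()       # as in the snippet above
            _cluster = rados.Rados(context=ctx_capsule)
            _cluster.connect()
        return _cluster
```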

@jcsp (Contributor) commented Jun 6, 2017

I think the idea of scanning for rbd-using pools per OSD epoch is probably overkill, and also not 100% watertight: if I create a pool it's initially not an rbd pool; it only becomes one when I write some stuff to it (and of course writing stuff doesn't affect the epoch).

We could have a class that only updates itself when it either sees new pools in the osdmap (it can read that every time, since the osdmap is right here in memory) or sees a pool that previously had zero objects and now has >0 objects. That would cover all the "normal" cases, although it could still be fooled by someone who e.g. repurposed a cephfs data pool for use with rbd without telling us.

I kind of left this out when doing the quickie rbd_ls, because I wasn't sure whether we would do something like this first, or whether we would do something that let us explicitly tag pools in the osdmap with what subsystem was going to use them.
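
To make that suggestion concrete, a rough sketch of such a class; `get_pool_ids`, `get_object_count` and `rescan_rbd_pools` are placeholders for however the module would read the osdmap and pool stats, not real APIs:

```python
# Sketch only: rescan for RBD pools when a new pool appears in the osdmap,
# or when a previously empty pool now holds objects.

class RbdPoolWatcher(object):
    def __init__(self):
        self._known_pools = set()   # pool ids seen in the last osdmap
        self._object_counts = {}    # pool id -> last observed object count
        self._rbd_pools = []        # cached scan result

    def maybe_rescan(self, get_pool_ids, get_object_count, rescan_rbd_pools):
        pool_ids = set(get_pool_ids())
        changed = bool(pool_ids - self._known_pools)   # new pools appeared

        for pool_id in pool_ids:
            count = get_object_count(pool_id)
            # a pool going from empty to non-empty may now contain rbd data
            if self._object_counts.get(pool_id, 0) == 0 and count > 0:
                changed = True
            self._object_counts[pool_id] = count

        self._known_pools = pool_ids
        if changed:
            self._rbd_pools = rescan_rbd_pools()
        return self._rbd_pools
```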

@liewegas (Member) commented Jun 6, 2017

@dillaman (Author, Contributor) commented Jun 6, 2017

@jcsp @liewegas RBD currently creates and deletes a self-managed snapshot if the "rbd_directory" object is missing, so that is why I was using the epoch for tracking. As for tags, the initial "rbd create" on the pool could easily set a tag if unset.

@jcsp (Contributor) commented Jun 6, 2017

Ah, I did not realise that there was a snapshot when rbd_directory was created.

Setting the tag during rbd create would need higher caps than an rbd admin normally needs to just write to OSDs, but I imagine in many (most?) cases it's really going to be someone using client.admin key to run the rbd CLI anyway.

@jcsp (Contributor) commented Jun 8, 2017

I've reworked a bit here if you want to cherry-pick it: https://github.com/jcsp/ceph/tree/wip-rbd-dashboard-jcsp

This depends on #15577

@dillaman force-pushed the dillaman:wip-rbd-dashboard branch from c84bc41 to aba76e2 on Jun 8, 2017

@dillaman (Author, Contributor) commented Jun 8, 2017

@jcsp update pushed

@@ -376,6 +380,20 @@ def _toplevel_data(self):
"""
Data consumed by the base.html template
"""
osd_map = global_instance().get_sync_object(OsdMap).data

@jcsp (Contributor) commented Jun 21, 2017

unused instance of osd_map

Commit: pybind/mgr/dashboard: initial block integration
Signed-off-by: Jason Dillaman <dillaman@redhat.com>

@dillaman force-pushed the dillaman:wip-rbd-dashboard branch from aba76e2 to 82e4816 on Jun 23, 2017

@jcsp approved these changes on Jun 27, 2017

@jcsp merged commit adf2036 into ceph:master on Jun 27, 2017

3 of 4 checks passed

make check: failed
Signed-off-by: all commits in this PR are signed
Unmodified Submodules: submodules for project are unmodified
arm64 make check: succeeded

@dillaman deleted the dillaman:wip-rbd-dashboard branch on Jun 27, 2017
