
BZ #1180158: Ceilometer workload partitioning with tooz & redis #449

Closed
wants to merge 2 commits

Conversation

@eglynn commented Jan 12, 2015

https://bugzilla.redhat.com/show_bug.cgi?id=1180158

The quickstack::pacemaker::ceilometer puppet class now allows
redis to be specified as the backend for tooz, to be used for
workload partitioning in the ceilometer central agent and
alarm evaluator.

We do not create a pacemaker resource for redis, as redis will
instead be self-monitored via the redis-sentinel service (in a
subsequent patch, once sentinel support in tooz is packaged in
python-tooz).
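
For illustration only, a minimal usage sketch; apart from the class name
and the coordination_backend parameter added here, the values below are
placeholders rather than the module's actual defaults:

    # Rough sketch, not the shipped manifest: select redis as the tooz
    # coordination backend for the ceilometer central agent and alarm
    # evaluator managed by this class.
    class { 'quickstack::pacemaker::ceilometer':
      coordination_backend => 'redis',
      verbose              => 'false',
    }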

@eglynn (Author) commented Jan 12, 2015

Capturing some relevant discussion from IRC ...

 <jayg> eglynn-office: before I forget, since convo is moved to your new PR, I like your thought of the vip +haproxy on the sentinels, that may solve the part I was concerned about
 <eglynn-office> jayg: thanks ... a potentially even simpler approach is being proposed to tooz by cdent here https://review.openstack.org/146463
 <eglynn-office> jayg: ... i.e. allowing *all* of the sentinels to be identified in the backend URL
 <jayg> eglynn-office: sure, that works too, more like we do with mongo already
 <eglynn-office> jayg: coolness
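
For reference, a backend URL of the style discussed above would look
roughly like the sketch below; the addresses are made up, and the
query-parameter names follow the tooz review linked above rather than
anything already merged:

    # Illustrative only: one sentinel named in the host part, the remaining
    # sentinels supplied as fallback query parameters.
    $backend_url = 'redis://192.0.2.11:26379?sentinel=mymaster&sentinel_fallback=192.0.2.12:26379&sentinel_fallback=192.0.2.13:26379'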

https://bugzilla.redhat.com/show_bug.cgi?id=1180158

We use redis-sentinel to monitor the redis instances running on
controller nodes, and to detect when mastership needs to be failed
over to a slave.

The sentinel cluster has a default quorum of 2, so it can survive the
loss of one controller on a minimally sized HA controller plane.

The ceilometer backend_url contains a list of fallback sentinels,
similar in style to the mongodb connect URL, except that the additional
addresses are specified as query parameters.
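
As a rough sketch of the quorum side only (master name, address, and file
path are illustrative, and the actual manifest may manage sentinel quite
differently), a quorum of 2 corresponds to a sentinel.conf entry like this
on each controller:

    # Illustrative only: every controller's sentinel watches the same master;
    # two sentinels must agree it is down before a failover is triggered, so
    # a three-node controller plane tolerates the loss of one node.
    file { '/etc/redis-sentinel.conf':
      content => "sentinel monitor mymaster 192.0.2.11 6379 2\n",
    }
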
An inline review comment from a project member on the new parameters:

    $memcached_port = '11211',
    $db_port = '27017',
    $verbose = 'false',
    $coordination_backend = 'redis',

Super minor nit - could you put these new params in alphabetical order? I think it is easier to find things that way.

@jguiditta (Member) commented:

Minor nit aside, is this ready for me to test against release repos? If so, and we have all needed acks, we can try to include it tomorrow.

@eglynn (Author) commented Jan 14, 2015

@jguiditta: to be fully functional, the second commit of this pull request requires new builds of openstack-puppet-modules and python-tooz. After discussions around the o-p-m rebuild yesterday, eng mgmt decided not to incur the risk of rebuilding o-p-m prior to the OSP6 release candidate.

The first commit, on the other hand, should work against the release repos, as it does not install sentinel (the aspect for which the new build of o-p-m would be required).

From our discussion on the original pull request, I suspect you may be leery of including only the basic redis support, without sentinel, in OSP6. However, if that kind of incremental approach is indeed a runner, I could recreate the pull request with only the initial commit.

@jguiditta (Member) commented:

Closing this, as the functionality is included in #506

@jguiditta closed this Apr 8, 2015