ansible: add roles for contiv storage service #20
Conversation
- this shall start ceph and volmaster/supervisor/plugin service

Signed-off-by: Madhav Puri <madhav.puri@gmail.com>
@@ -0,0 +1 @@
VOLSUPERVISOR_ARGS="--debug"
I would recommend leaving this blank; volsupervisor is extremely chatty in debug mode.
sure, will do.
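A minimal sketch of what the defaults file might look like after the change (the variable name comes from the diff above; the empty value is an assumption based on the review comment):

```
# volsupervisor is extremely chatty in debug mode, so leave the args
# empty by default; operators can add --debug when troubleshooting.
VOLSUPERVISOR_ARGS=""
```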
ok, one last thing: ceph mons are more like etcd than block storage; they are a consensus database. The OSDs should be numerous, but the mons should not, and their count should be odd, like zookeeper etc. I think we should distribute a volmaster with each mon and point the other docker volplugin instances at those. @shaleman might have some strong opinions on the architecture here too. Here's some more info: http://docs.ceph.com/docs/v0.71/rados/configuration/mon-config-ref/
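The quorum arithmetic behind the odd-member recommendation can be illustrated with a short sketch (not from the PR; just the standard majority math that ceph mons, etcd, and zookeeper all rely on):

```python
def majority(n: int) -> int:
    """Smallest member count that still forms a quorum in an n-member cluster."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Members that can be lost while the remainder still holds a quorum."""
    return n - majority(n)

# An even member count buys no extra fault tolerance over the odd
# count below it: 2 mons tolerate 0 failures, just like 1 mon.
for n in (1, 2, 3, 4, 5):
    print(f"{n} members -> quorum {majority(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

This is why adding a second mon makes things strictly worse (two machines that can fail, still zero tolerated failures), while going from 2 to 3 is a real improvement.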
I see, so I think I will make the change to run For etcd, I have seen it work ok with 2 nodes, but I agree that for consensus we usually need an odd number of nodes, and we should be able to achieve that by having an odd number of hosts in
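A hypothetical inventory sketch of that layout (the group names are illustrative, not taken from the playbooks in this PR): an odd number of mon hosts, with a volmaster colocated on each.

```
# hypothetical group names, for illustration only
[ceph-mons]        # keep this count odd: 1, 3, 5, ...
mon-node-1
mon-node-2
mon-node-3

# run a volmaster alongside every mon; volplugin instances on the
# remaining hosts point at these
[volmasters:children]
ceph-mons
```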
That sounds like a great plan. Do you want to merge this as-is, or wait until this work is done?
cool...yeah, I will update the diffs to incorporate the comment for removing the

Will update the diffs in a bit and ask you to take one more look.
@erikh this should be ready for another look. I took care of the comments for removing the

Right now it comes with a caveat that the ceph mon and osd related host-groups need to be provisioned together in the same run of the ansible playbook. I will open an issue to enhance the ceph playbook to allow incremental configuration of mons and osds.
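In concrete terms, the caveat means a single playbook invocation has to cover both host-groups; something like the following hypothetical run (playbook, inventory, and group names are illustrative only, not from this PR):

```
# provision mons and osds in the same run; configuring them in
# separate runs is not supported until incremental configuration lands
ansible-playbook -i inventory site.yml --limit "ceph-mons:ceph-osds"
```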
OK, assign the other issue to me unless it's a blocker for you, I would

On 5 Nov 2015, at 16:34, Madhav Puri wrote:
cool, thanks Erik. I have created #22 to track this. It is not a blocker for me right now, as in the cluster's vagrant setup (i.e. using the Vagrantfile) I am able to provision all the cluster nodes at once. I will need it once I start integrating the new playbooks with the individual node commission/decommission logic in cluster manager, which I am thinking can be tackled after the DockerCon rush is over. I will merge this one in a bit.
ansible: add roles for contiv storage service
oops, looks like I forgot to update the diffs before merge :( ... I will revert this change and create a new PR with the updated change in a bit.
this shall start ceph and volmaster/supervisor/plugin service in the cluster.
@erikh PTAL when you get a chance