
ansible: add roles for contiv storage service #20

Merged
merged 1 commit into from
Nov 6, 2015
Conversation

mapuri
Contributor

@mapuri mapuri commented Nov 3, 2015

This will start the ceph and volmaster/volsupervisor/volplugin services in the cluster.

@erikh PTAL when you get a chance

- this shall start ceph and volmaster/supervisor/plugin service

Signed-off-by: Madhav Puri <madhav.puri@gmail.com>
@@ -0,0 +1 @@
VOLSUPERVISOR_ARGS="--debug"
Contributor

I would recommend leaving this blank; volsupervisor is extremely chatty in debug mode.
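In other words, the suggestion is to ship the environment file with the flag left blank, roughly like this (a sketch, not the final file contents):

```sh
# volsupervisor environment file: leave ARGS empty by default;
# --debug can be re-added locally when verbose logging is needed.
VOLSUPERVISOR_ARGS=""
```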

Contributor Author

sure, will do.

@erikh
Contributor

erikh commented Nov 4, 2015

ok, one last thing:

ceph mons are more like etcd than block storage; they are a consensus database. The OSDs should be numerous, but the mons should not, and there should be an odd number of them, as with zookeeper etc. I think we should distribute a volmaster with each mon and point the other docker volplugin instances at those.

@shaleman might have some strong opinions on the architecture here too.

Here's some more info: http://docs.ceph.com/docs/v0.71/rados/configuration/mon-config-ref/

@mapuri
Contributor Author

mapuri commented Nov 4, 2015

@erikh

ceph mons are more like etcd than block storage; they are a consensus database. The OSDs should be numerous, but the mons should not, and there should be an odd number of them, as with zookeeper etc. I think we should distribute a volmaster with each mon and point the other docker volplugin instances at those.

I see. So I will make the change to run ceph mons and osds on service-master hosts, and just ceph osds on service-worker hosts.

For etcd, I have seen it work OK with 2 nodes, but I agree that consensus usually needs an odd number of nodes, and we should be able to achieve that by having an odd number of hosts in the service-master group once we support it, as I described in my comment above.
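The layout described above could look roughly like this in an Ansible inventory (host and group names are illustrative, not taken from the actual playbooks):

```ini
; Hypothetical inventory sketch: mons (with a co-located volmaster)
; run only on service-master hosts; OSDs run on every storage node.
[service-master]
master1   ; runs ceph-mon, ceph-osd, volmaster
master2
master3   ; odd number of mons for quorum, as with zookeeper/etcd

[service-worker]
worker1   ; runs ceph-osd and volplugin only
worker2
```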

@erikh
Contributor

erikh commented Nov 4, 2015

That sounds like a great plan. Do you want to merge this as-is or wait until this work is done?

@mapuri
Contributor Author

mapuri commented Nov 4, 2015

cool... yeah, I will update the diffs to incorporate the comments: removing the --debug flag for volsupervisor and running just ceph osds on service-worker.

Will update the diffs in a bit and ask for you to take one more look.

@mapuri
Contributor Author

mapuri commented Nov 6, 2015

@erikh this should be ready for another look.

I took care of the comments: removing the --debug flag for volsupervisor and running just ceph osds on service-worker.

Right now it comes with a caveat: the host-groups for the ceph mons and osds need to be provisioned together in the same run of the ansible playbook. I will open an issue to enhance the ceph playbook to allow incremental configuration of mons and osds.
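Concretely, the caveat means both host-groups have to appear in a single playbook invocation, something like the following (playbook and inventory names are illustrative):

```sh
# Illustrative: provision the mon and osd host-groups in one run;
# configuring them in separate invocations is not yet supported.
ansible-playbook -i hosts site.yml --limit "service-master:service-worker"
```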

@erikh
Contributor

erikh commented Nov 6, 2015

OK, assign the other issue to me unless it's a blocker for you; I would like to tackle this.


@mapuri
Contributor Author

mapuri commented Nov 6, 2015

cool, thanks Erik. I have created #22 to track this. It is not a blocker for me right now, since in the cluster's vagrant setup (i.e. using the Vagrantfile) I am able to provision all the cluster nodes at once.

I will need it once I start integrating the new playbooks with the individual node commission/decommission logic in cluster manager, which I think can be tackled after the DockerCon rush is over.

I will merge this one now in a bit.

mapuri added a commit that referenced this pull request Nov 6, 2015
ansible: add roles for contiv storage service
@mapuri mapuri merged commit 7cf9a9c into contiv:master Nov 6, 2015
@mapuri mapuri deleted the volplugin branch November 6, 2015 01:15
@mapuri
Contributor Author

mapuri commented Nov 6, 2015

oops, looks like I forgot to update the diffs before the merge :( ... I will revert this change and create a new PR with the updated change in a bit.
