
ceph-disk: ability to use a different cluster name with dmcrypt #11786

Merged
2 commits merged into ceph:master from dmcrypt-cluster-name on Feb 17, 2017

Conversation

@leseb (Member) commented Nov 4, 2016

Prior to this commit we were not able to configure an OSD using dmcrypt
on a cluster with a name other than 'ceph'. Adding the --cluster command
line option fixes this.

Signed-off-by: Sébastien Han <seb@redhat.com>

@@ -2377,13 +2379,14 @@ def set_or_create_partition(self):
                 self.args.lockbox)
             self.partition = self.create_partition()

-    def create_key(self):
+    def create_key(self, cluster):
Review comment (Contributor):
there is no need for requiring cluster here, you can rely on self.args since that will have the arguments passed in:

self.args.cluster

        key_size = CryptHelpers.get_dmcrypt_keysize(self.args)
        key = open('/dev/urandom', 'rb').read(key_size / 8)
        base64_key = base64.b64encode(key)
        command_check_call(
            [
                'ceph',
                '--cluster', cluster,
Review comment (Contributor):

you would be able to just use that self.args.cluster in here, which already defaults to ceph if not explicitly defined
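The refactor the reviewer suggests can be sketched roughly as follows. This is a simplified, self-contained illustration, not the real ceph-disk code: the fixed `key_size` and the bare argparse setup stand in for ceph-disk internals such as `CryptHelpers.get_dmcrypt_keysize()` and `command_check_call()`.

```python
import argparse
import base64
import os

def create_key(args):
    # args.cluster already defaults to 'ceph' when --cluster is not
    # given, so no extra parameter needs to be threaded through.
    key_size = 256  # stand-in for CryptHelpers.get_dmcrypt_keysize(args)
    key = os.urandom(key_size // 8)  # ceph-disk reads /dev/urandom
    base64_key = base64.b64encode(key)
    # the real code shells out via command_check_call(); here we only
    # build the command prefix to show where the cluster name lands
    cmd = ['ceph', '--cluster', args.cluster]
    return cmd, base64_key

parser = argparse.ArgumentParser()
parser.add_argument('--cluster', metavar='NAME', default='ceph',
                    help='ceph cluster name (default: ceph)')
cmd, _ = create_key(parser.parse_args([]))
```

With no explicit option, `cmd` comes out as `['ceph', '--cluster', 'ceph']`; passing `--cluster backup` substitutes the custom name.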

@leseb leseb force-pushed the dmcrypt-cluster-name branch 2 times, most recently from 8b95a96 to 1edc9f8 Compare November 4, 2016 16:06
@liewegas liewegas added this to the kraken milestone Nov 4, 2016
@ktdreyer (Member) commented Nov 4, 2016

Would you please file a ticket at http://tracker.ceph.com for this, and add the URL of the ticket to the commit log, so we can track this and ensure it is backported to Jewel eventually?

@leseb (Member, Author) commented Nov 8, 2016

I'll update the PR with a definitive fix today.

@leseb leseb force-pushed the dmcrypt-cluster-name branch 2 times, most recently from 6ba659e to 2a75124 Compare November 9, 2016 16:24
@tchaikov (Contributor) commented Nov 14, 2016

@leseb could you put the line Fixes: http://tracker.ceph.com/issues/17821 into your commit message, right before your "Signed-off-by" line? See https://github.com/ceph/ceph/blob/master/SubmittingPatches.rst#tag-the-commit .
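For reference, the tag placement described in SubmittingPatches.rst makes the commit message look like this (layout reconstructed from details already quoted in this thread):

```text
ceph-disk: ability to use a different cluster name with dmcrypt

Prior to this commit we were not able to configure an OSD using dmcrypt
on a cluster with a name other than 'ceph'.

Fixes: http://tracker.ceph.com/issues/17821
Signed-off-by: Sébastien Han <seb@redhat.com>
```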

@leseb (Member, Author) commented Nov 14, 2016

@tchaikov just did, thanks!
I still have some work to do on that patch.

@ktdreyer ktdreyer changed the title ceph-disk: ability to use a different cluster name with dmcrypt DNM: ceph-disk: ability to use a different cluster name with dmcrypt Nov 14, 2016
@ktdreyer (Member):
Please remove "DNM" from the title when this PR is ready for review and merging.

@leseb leseb changed the title DNM: ceph-disk: ability to use a different cluster name with dmcrypt ceph-disk: ability to use a different cluster name with dmcrypt Nov 15, 2016
@leseb (Member, Author) commented Nov 15, 2016

@ErwanAliasr1 and I reworked the patch, ready for review now :)

@ErwanAliasr1 (Contributor):
WDYT about this patch series, guys?

@ktdreyer (Member):
Would you please rebase this onto the latest master to ensure that Jenkins is happy with it?

@leseb (Member, Author) commented Nov 16, 2016

@ktdreyer done

@ghost commented Nov 17, 2016

this needs rebasing (and the test fails)

metavar='NAME',
default='ceph',
help='ceph cluster name (default: ceph)',
)
Review comment:
Activate does not have a --cluster argument because it can be determined with the find_cluster_by_uuid() function

Reply (Member, Author):

Alright, let me try again without this, I forgot the error that made us think we needed to do this.
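The lookup mentioned in the review comment above can be sketched roughly like this. It is a simplified, hypothetical rendition of the idea behind ceph-disk's find_cluster_by_uuid() (scan the ceph conf files and match the fsid), not the actual implementation, which handles more cases:

```python
import configparser
import glob
import os

def find_cluster_by_uuid(wanted_fsid, confdir='/etc/ceph'):
    """Return the cluster name whose conf file carries wanted_fsid."""
    for path in glob.glob(os.path.join(confdir, '*.conf')):
        cp = configparser.ConfigParser()
        cp.read(path)
        fsid = cp.get('global', 'fsid', fallback=None)
        if fsid == wanted_fsid:
            # the cluster name is the conf file's basename without .conf,
            # e.g. /etc/ceph/backup.conf -> 'backup'
            return os.path.splitext(os.path.basename(path))[0]
    return None
```

This is why `activate` does not strictly need a `--cluster` flag: the OSD's ceph_fsid can be matched against the fsid in each `<cluster>.conf`.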

@leseb (Member, Author) commented Nov 17, 2016

OK, testing is blocked by #12033; waiting for the backport to re-validate.

@liewegas liewegas removed this from the kraken milestone Nov 28, 2016
@leseb (Member, Author) commented Feb 8, 2017

@dachary I don't have permission to push a branch to that repo, that's why I created a PR...

@ghost commented Feb 8, 2017

@leseb you have permission to push; all Ceph developers have permission and you're in the group. It must be something misconfigured on your side.

@leseb (Member, Author) commented Feb 8, 2017

@dachary please see:

$ git remote -v |grep ceph-cii
ceph-cii        git@github.com:ceph/ceph-ci.git (push)
ceph-cii        git@github.com:ceph/ceph-ci.git (fetch)
$ git push ceph-cii dmcrypt-cluster-name
ERROR: Permission to ceph/ceph-ci.git denied to leseb.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Am I doing something wrong?

@ghost commented Feb 8, 2017

I don't see what's wrong. Here are the permissions you have:

[screenshot: repository permissions for user seb]

I guess someone with better knowledge of github subtle settings can help out :-)

@leseb (Member, Author) commented Feb 8, 2017

@dachary seems to be working now, branch pushed :)

@ghost commented Feb 8, 2017

@leseb nice :-) What was the problem? Do you have access to the sepia lab?

@leseb (Member, Author) commented Feb 8, 2017

@dachary well looks like I wasn't part of Ceph team, then I got an email saying you added me and it was good :).

@ghost commented Feb 8, 2017

> well looks like I wasn't part of Ceph team, then I got an email saying you added me and it was good :).

Oo amazing. I was looking to verify if you were listed and mistook the "add" box for the "search" box... happy ending :-) Welcome to the Ceph team, always nice to have new members ...

@yuriw (Contributor) commented Feb 16, 2017

@ghost ghost merged commit 057e7b1 into ceph:master Feb 17, 2017
@neurodrone (Contributor):
Will this be backported to Jewel by any chance?

@vumrao (Contributor) commented Feb 17, 2017

@neurodrone this is the tracker: http://tracker.ceph.com/issues/17821. It was marked backport to jewel and I have changed it to pending backport; it will get backported to jewel.

ganeshmaharaj pushed a commit to ganeshmaharaj/ceph-deploy that referenced this pull request Feb 20, 2017
With recent changes in ceph (ceph/ceph#11786),
this change will allow ceph-deploy osd activate to complete without
errors. Follow-on fix for http://tracker.ceph.com/issues/17821

Signed-off-by: Ganesh Mahalingam <ganesh.mahalingam@intel.com>
@vasukulkarni (Contributor):
@dachary it would have been nice to default the cluster to 'ceph' here; we got a patch for ceph-deploy to add the cluster option there. It would also be nice to run the ceph-deploy suite for ceph-disk changes.

@ganeshmaharaj (Contributor):
@dachary @vasukulkarni Should have updated both threads. #13527 is the patch submitted to ceph-disk to add the default.

@ghost commented Feb 21, 2017

@leseb I was wrong to merge this pull request. I misread #11786 (comment) and thought the tests were successful. Instead they failed, and as a result ceph-disk is now broken in master. Could you please fix this ASAP?

@leseb (Member, Author) commented Feb 21, 2017

@dachary I originally proposed the same approach as in #13527 so we should merge it.

@ghost commented Feb 21, 2017

@leseb of course, entirely my fault, please accept my apologies

@ganeshmaharaj (Contributor) commented Feb 21, 2017

@dachary @leseb Thanks for the update. I believe with this, ceph/ceph-deploy#430 will be needed for ceph-deploy to work. @vasukulkarni

@leseb (Member, Author) commented Feb 21, 2017

@dachary No worries :)

@ChristinaMeno (Contributor):
@leseb @dachary Would you please help me understand the ceph-disk suite failure here?
cheers,
G

@ghost
Copy link

ghost commented Feb 22, 2017

@GregMeno ceph-disk activate does not have a --cluster argument but the function implementing it expects it. It does not show in the ceph-disk suite output because the part that fails is run via either systemd or udev.
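The failure mode described above can be reproduced in miniature. The parsers below are a simplified, hypothetical stand-in for the real ceph-disk ones, only to show why code reading args.cluster blows up when the activate subcommand never defines the option:

```python
import argparse

parser = argparse.ArgumentParser(prog='ceph-disk')
sub = parser.add_subparsers(dest='command')

prepare = sub.add_parser('prepare')
prepare.add_argument('--cluster', default='ceph')

activate = sub.add_parser('activate')  # no --cluster defined here

args = parser.parse_args(['activate'])
try:
    name = args.cluster  # raises: the namespace has no such attribute
except AttributeError:
    # hence the follow-up fix: default to 'ceph', or look the name up
    # via the fsid (find_cluster_by_uuid) instead of a CLI flag
    name = None
```

A `prepare` invocation is unaffected, since its subparser defines the option and supplies the `'ceph'` default.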
